Nov 25 19:27:26 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 19:27:26 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 19:27:26 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 19:27:26 localhost kernel: BIOS-provided physical RAM map:
Nov 25 19:27:26 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 19:27:26 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 19:27:26 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 19:27:26 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 25 19:27:26 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 25 19:27:26 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 19:27:26 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 19:27:26 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 25 19:27:26 localhost kernel: NX (Execute Disable) protection: active
Nov 25 19:27:26 localhost kernel: APIC: Static calls initialized
Nov 25 19:27:26 localhost kernel: SMBIOS 2.8 present.
Nov 25 19:27:26 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 25 19:27:26 localhost kernel: Hypervisor detected: KVM
Nov 25 19:27:26 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 19:27:26 localhost kernel: kvm-clock: using sched offset of 10083091929 cycles
Nov 25 19:27:26 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 19:27:26 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 25 19:27:26 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 25 19:27:26 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 25 19:27:26 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 25 19:27:26 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 19:27:26 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 19:27:26 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 25 19:27:26 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 25 19:27:26 localhost kernel: Using GB pages for direct mapping
Nov 25 19:27:26 localhost kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 25 19:27:26 localhost kernel: ACPI: Early table checksum verification disabled
Nov 25 19:27:26 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 25 19:27:26 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:27:26 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:27:26 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:27:26 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 25 19:27:26 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:27:26 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:27:26 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 25 19:27:26 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 25 19:27:26 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 25 19:27:26 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 25 19:27:26 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 25 19:27:26 localhost kernel: No NUMA configuration found
Nov 25 19:27:26 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 25 19:27:26 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 25 19:27:26 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 25 19:27:26 localhost kernel: Zone ranges:
Nov 25 19:27:26 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 19:27:26 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 19:27:26 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 19:27:26 localhost kernel:   Device   empty
Nov 25 19:27:26 localhost kernel: Movable zone start for each node
Nov 25 19:27:26 localhost kernel: Early memory node ranges
Nov 25 19:27:26 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 19:27:26 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 25 19:27:26 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 19:27:26 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 25 19:27:26 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 19:27:26 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 19:27:26 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 19:27:26 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 19:27:26 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 19:27:26 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 19:27:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 19:27:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 19:27:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 19:27:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 19:27:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 19:27:26 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 19:27:26 localhost kernel: TSC deadline timer available
Nov 25 19:27:26 localhost kernel: CPU topo: Max. logical packages:   8
Nov 25 19:27:26 localhost kernel: CPU topo: Max. logical dies:       8
Nov 25 19:27:26 localhost kernel: CPU topo: Max. dies per package:   1
Nov 25 19:27:26 localhost kernel: CPU topo: Max. threads per core:   1
Nov 25 19:27:26 localhost kernel: CPU topo: Num. cores per package:     1
Nov 25 19:27:26 localhost kernel: CPU topo: Num. threads per package:   1
Nov 25 19:27:26 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 25 19:27:26 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 19:27:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 19:27:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 19:27:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 19:27:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 19:27:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 25 19:27:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 25 19:27:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 19:27:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 19:27:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 19:27:26 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 25 19:27:26 localhost kernel: Booting paravirtualized kernel on KVM
Nov 25 19:27:26 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 19:27:26 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 25 19:27:26 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 25 19:27:26 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 25 19:27:26 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 25 19:27:26 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 25 19:27:26 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 19:27:26 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 19:27:26 localhost kernel: random: crng init done
Nov 25 19:27:26 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 19:27:26 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 19:27:26 localhost kernel: Fallback order for Node 0: 0 
Nov 25 19:27:26 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 19:27:26 localhost kernel: Policy zone: Normal
Nov 25 19:27:26 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 19:27:26 localhost kernel: software IO TLB: area num 8.
Nov 25 19:27:26 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 25 19:27:26 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 19:27:26 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 19:27:26 localhost kernel: Dynamic Preempt: voluntary
Nov 25 19:27:26 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 19:27:26 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 25 19:27:26 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 25 19:27:26 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 25 19:27:26 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 25 19:27:26 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 25 19:27:26 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 19:27:26 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 25 19:27:26 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 19:27:26 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 19:27:26 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 19:27:26 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 25 19:27:26 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 19:27:26 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 19:27:26 localhost kernel: Console: colour VGA+ 80x25
Nov 25 19:27:26 localhost kernel: printk: console [ttyS0] enabled
Nov 25 19:27:26 localhost kernel: ACPI: Core revision 20230331
Nov 25 19:27:26 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 19:27:26 localhost kernel: x2apic enabled
Nov 25 19:27:26 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 19:27:26 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 19:27:26 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 25 19:27:26 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 19:27:26 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 19:27:26 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 19:27:26 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 19:27:26 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 19:27:26 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 19:27:26 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 25 19:27:26 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 25 19:27:26 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 19:27:26 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 19:27:26 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 19:27:26 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 19:27:26 localhost kernel: x86/bugs: return thunk changed
Nov 25 19:27:26 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 19:27:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 19:27:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 19:27:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 19:27:26 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 19:27:26 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 25 19:27:26 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 25 19:27:26 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 25 19:27:26 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 19:27:26 localhost kernel: landlock: Up and running.
Nov 25 19:27:26 localhost kernel: Yama: becoming mindful.
Nov 25 19:27:26 localhost kernel: SELinux:  Initializing.
Nov 25 19:27:26 localhost kernel: LSM support for eBPF active
Nov 25 19:27:26 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 19:27:26 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 19:27:26 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 25 19:27:26 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 19:27:26 localhost kernel: ... version:                0
Nov 25 19:27:26 localhost kernel: ... bit width:              48
Nov 25 19:27:26 localhost kernel: ... generic registers:      6
Nov 25 19:27:26 localhost kernel: ... value mask:             0000ffffffffffff
Nov 25 19:27:26 localhost kernel: ... max period:             00007fffffffffff
Nov 25 19:27:26 localhost kernel: ... fixed-purpose events:   0
Nov 25 19:27:26 localhost kernel: ... event mask:             000000000000003f
Nov 25 19:27:26 localhost kernel: signal: max sigframe size: 1776
Nov 25 19:27:26 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 25 19:27:26 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 25 19:27:26 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 25 19:27:26 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 25 19:27:26 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 25 19:27:26 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 25 19:27:26 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 25 19:27:26 localhost kernel: node 0 deferred pages initialised in 9ms
Nov 25 19:27:26 localhost kernel: Memory: 7776568K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 605560K reserved, 0K cma-reserved)
Nov 25 19:27:26 localhost kernel: devtmpfs: initialized
Nov 25 19:27:26 localhost kernel: x86/mm: Memory block size: 128MB
Nov 25 19:27:26 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 19:27:26 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 25 19:27:26 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 19:27:26 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 19:27:26 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 19:27:26 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 19:27:26 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 19:27:26 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 25 19:27:26 localhost kernel: audit: type=2000 audit(1764098844.116:1): state=initialized audit_enabled=0 res=1
Nov 25 19:27:26 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 19:27:26 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 19:27:26 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 19:27:26 localhost kernel: cpuidle: using governor menu
Nov 25 19:27:26 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 19:27:26 localhost kernel: PCI: Using configuration type 1 for base access
Nov 25 19:27:26 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 25 19:27:26 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 19:27:26 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 19:27:26 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 19:27:26 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 19:27:26 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 19:27:26 localhost kernel: Demotion targets for Node 0: null
Nov 25 19:27:26 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 19:27:26 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 25 19:27:26 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 25 19:27:26 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 19:27:26 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 19:27:26 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 19:27:26 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 19:27:26 localhost kernel: ACPI: Interpreter enabled
Nov 25 19:27:26 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 25 19:27:26 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 19:27:26 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 19:27:26 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 19:27:26 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 25 19:27:26 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 19:27:26 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [3] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [4] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [5] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [6] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [7] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [8] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [9] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [10] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [11] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [12] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [13] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [14] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [15] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [16] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [17] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [18] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [19] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [20] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [21] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [22] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [23] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [24] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [25] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [26] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [27] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [28] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [29] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [30] registered
Nov 25 19:27:26 localhost kernel: acpiphp: Slot [31] registered
Nov 25 19:27:26 localhost kernel: PCI host bridge to bus 0000:00
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 19:27:26 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 25 19:27:26 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 25 19:27:26 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 25 19:27:26 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 25 19:27:26 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 25 19:27:26 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 19:27:26 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 25 19:27:26 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 25 19:27:26 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 25 19:27:26 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 25 19:27:26 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 25 19:27:26 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 25 19:27:26 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 25 19:27:26 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 25 19:27:26 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 25 19:27:26 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 19:27:26 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 25 19:27:26 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 25 19:27:26 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 19:27:26 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 19:27:26 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 19:27:26 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 19:27:26 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 25 19:27:26 localhost kernel: iommu: Default domain type: Translated
Nov 25 19:27:26 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 19:27:26 localhost kernel: SCSI subsystem initialized
Nov 25 19:27:26 localhost kernel: ACPI: bus type USB registered
Nov 25 19:27:26 localhost kernel: usbcore: registered new interface driver usbfs
Nov 25 19:27:26 localhost kernel: usbcore: registered new interface driver hub
Nov 25 19:27:26 localhost kernel: usbcore: registered new device driver usb
Nov 25 19:27:26 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 19:27:26 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 19:27:26 localhost kernel: PTP clock support registered
Nov 25 19:27:26 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 25 19:27:26 localhost kernel: NetLabel: Initializing
Nov 25 19:27:26 localhost kernel: NetLabel:  domain hash size = 128
Nov 25 19:27:26 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 19:27:26 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 19:27:26 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 25 19:27:26 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 25 19:27:26 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 25 19:27:26 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 25 19:27:26 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 25 19:27:26 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 25 19:27:26 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 19:27:26 localhost kernel: vgaarb: loaded
Nov 25 19:27:26 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 19:27:26 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 19:27:26 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 19:27:26 localhost kernel: pnp: PnP ACPI init
Nov 25 19:27:26 localhost kernel: pnp 00:03: [dma 2]
Nov 25 19:27:26 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 25 19:27:26 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 19:27:26 localhost kernel: NET: Registered PF_INET protocol family
Nov 25 19:27:26 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 19:27:26 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 19:27:26 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 19:27:26 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 19:27:26 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 19:27:26 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 19:27:26 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 19:27:26 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 19:27:26 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 19:27:26 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 19:27:26 localhost kernel: NET: Registered PF_XDP protocol family
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 25 19:27:26 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 25 19:27:26 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 25 19:27:26 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 25 19:27:26 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 113689 usecs
Nov 25 19:27:26 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 25 19:27:26 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 19:27:26 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 25 19:27:26 localhost kernel: ACPI: bus type thunderbolt registered
Nov 25 19:27:26 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 25 19:27:26 localhost kernel: Initialise system trusted keyrings
Nov 25 19:27:26 localhost kernel: Key type blacklist registered
Nov 25 19:27:26 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 19:27:26 localhost kernel: zbud: loaded
Nov 25 19:27:26 localhost kernel: integrity: Platform Keyring initialized
Nov 25 19:27:26 localhost kernel: integrity: Machine keyring initialized
Nov 25 19:27:26 localhost kernel: Freeing initrd memory: 75160K
Nov 25 19:27:26 localhost kernel: NET: Registered PF_ALG protocol family
Nov 25 19:27:26 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 25 19:27:26 localhost kernel: Key type asymmetric registered
Nov 25 19:27:26 localhost kernel: Asymmetric key parser 'x509' registered
Nov 25 19:27:26 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 19:27:26 localhost kernel: io scheduler mq-deadline registered
Nov 25 19:27:26 localhost kernel: io scheduler kyber registered
Nov 25 19:27:26 localhost kernel: io scheduler bfq registered
Nov 25 19:27:26 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 19:27:26 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 19:27:26 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 19:27:26 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 25 19:27:26 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 25 19:27:26 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 25 19:27:26 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 25 19:27:26 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 19:27:26 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 19:27:26 localhost kernel: Non-volatile memory driver v1.3
Nov 25 19:27:26 localhost kernel: rdac: device handler registered
Nov 25 19:27:26 localhost kernel: hp_sw: device handler registered
Nov 25 19:27:26 localhost kernel: emc: device handler registered
Nov 25 19:27:26 localhost kernel: alua: device handler registered
Nov 25 19:27:26 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 25 19:27:26 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 25 19:27:26 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 25 19:27:26 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 25 19:27:26 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 19:27:26 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 19:27:26 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 25 19:27:26 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 19:27:26 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 25 19:27:26 localhost kernel: hub 1-0:1.0: USB hub found
Nov 25 19:27:26 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 25 19:27:26 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 19:27:26 localhost kernel: usbserial: USB Serial support registered for generic
Nov 25 19:27:26 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 19:27:26 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 19:27:26 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 19:27:26 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 19:27:26 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 25 19:27:26 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 25 19:27:26 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-25T19:27:25 UTC (1764098845)
Nov 25 19:27:26 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 25 19:27:26 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 19:27:26 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 19:27:26 localhost kernel: usbcore: registered new interface driver usbhid
Nov 25 19:27:26 localhost kernel: usbhid: USB HID core driver
Nov 25 19:27:26 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 25 19:27:26 localhost kernel: Initializing XFRM netlink socket
Nov 25 19:27:26 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 25 19:27:26 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 19:27:26 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 19:27:26 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 19:27:26 localhost kernel: Segment Routing with IPv6
Nov 25 19:27:26 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 25 19:27:26 localhost kernel: mpls_gso: MPLS GSO support
Nov 25 19:27:26 localhost kernel: IPI shorthand broadcast: enabled
Nov 25 19:27:26 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 19:27:26 localhost kernel: AES CTR mode by8 optimization enabled
Nov 25 19:27:26 localhost kernel: sched_clock: Marking stable (1264001698, 150661484)->(1503921036, -89257854)
Nov 25 19:27:26 localhost kernel: registered taskstats version 1
Nov 25 19:27:26 localhost kernel: Loading compiled-in X.509 certificates
Nov 25 19:27:26 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 19:27:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 19:27:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 19:27:26 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 19:27:26 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 19:27:26 localhost kernel: Demotion targets for Node 0: null
Nov 25 19:27:26 localhost kernel: page_owner is disabled
Nov 25 19:27:26 localhost kernel: Key type .fscrypt registered
Nov 25 19:27:26 localhost kernel: Key type fscrypt-provisioning registered
Nov 25 19:27:26 localhost kernel: Key type big_key registered
Nov 25 19:27:26 localhost kernel: Key type encrypted registered
Nov 25 19:27:26 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 19:27:26 localhost kernel: Loading compiled-in module X.509 certificates
Nov 25 19:27:26 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 19:27:26 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 25 19:27:26 localhost kernel: ima: No architecture policies found
Nov 25 19:27:26 localhost kernel: evm: Initialising EVM extended attributes:
Nov 25 19:27:26 localhost kernel: evm: security.selinux
Nov 25 19:27:26 localhost kernel: evm: security.SMACK64 (disabled)
Nov 25 19:27:26 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 19:27:26 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 19:27:26 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 19:27:26 localhost kernel: evm: security.apparmor (disabled)
Nov 25 19:27:26 localhost kernel: evm: security.ima
Nov 25 19:27:26 localhost kernel: evm: security.capability
Nov 25 19:27:26 localhost kernel: evm: HMAC attrs: 0x1
Nov 25 19:27:26 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 19:27:26 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 19:27:26 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 19:27:26 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 19:27:26 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 25 19:27:26 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 25 19:27:26 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 19:27:26 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 25 19:27:26 localhost kernel: Running certificate verification RSA selftest
Nov 25 19:27:26 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 19:27:26 localhost kernel: Running certificate verification ECDSA selftest
Nov 25 19:27:26 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 19:27:26 localhost kernel: clk: Disabling unused clocks
Nov 25 19:27:26 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 25 19:27:26 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 19:27:26 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 25 19:27:26 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 19:27:26 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 19:27:26 localhost kernel: Run /init as init process
Nov 25 19:27:26 localhost kernel:   with arguments:
Nov 25 19:27:26 localhost kernel:     /init
Nov 25 19:27:26 localhost kernel:   with environment:
Nov 25 19:27:26 localhost kernel:     HOME=/
Nov 25 19:27:26 localhost kernel:     TERM=linux
Nov 25 19:27:26 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 25 19:27:26 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 19:27:26 localhost systemd[1]: Detected virtualization kvm.
Nov 25 19:27:26 localhost systemd[1]: Detected architecture x86-64.
Nov 25 19:27:26 localhost systemd[1]: Running in initrd.
Nov 25 19:27:26 localhost systemd[1]: No hostname configured, using default hostname.
Nov 25 19:27:26 localhost systemd[1]: Hostname set to <localhost>.
Nov 25 19:27:26 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 25 19:27:26 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 25 19:27:26 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 19:27:26 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 19:27:26 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 25 19:27:26 localhost systemd[1]: Reached target Local File Systems.
Nov 25 19:27:26 localhost systemd[1]: Reached target Path Units.
Nov 25 19:27:26 localhost systemd[1]: Reached target Slice Units.
Nov 25 19:27:26 localhost systemd[1]: Reached target Swaps.
Nov 25 19:27:26 localhost systemd[1]: Reached target Timer Units.
Nov 25 19:27:26 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 19:27:26 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 25 19:27:26 localhost systemd[1]: Listening on Journal Socket.
Nov 25 19:27:26 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 19:27:26 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 19:27:26 localhost systemd[1]: Reached target Socket Units.
Nov 25 19:27:26 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 19:27:26 localhost systemd[1]: Starting Journal Service...
Nov 25 19:27:26 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 19:27:26 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 19:27:26 localhost systemd[1]: Starting Create System Users...
Nov 25 19:27:26 localhost systemd[1]: Starting Setup Virtual Console...
Nov 25 19:27:26 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 19:27:26 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 19:27:26 localhost systemd[1]: Finished Create System Users.
Nov 25 19:27:26 localhost systemd-journald[306]: Journal started
Nov 25 19:27:26 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/ee007d1351734e648d3ec554c682b054) is 8.0M, max 153.6M, 145.6M free.
Nov 25 19:27:26 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Nov 25 19:27:26 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Nov 25 19:27:26 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 19:27:26 localhost systemd[1]: Started Journal Service.
Nov 25 19:27:26 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 19:27:26 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 19:27:26 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 19:27:26 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 19:27:26 localhost systemd[1]: Finished Setup Virtual Console.
Nov 25 19:27:26 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 19:27:26 localhost systemd[1]: Starting dracut cmdline hook...
Nov 25 19:27:26 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 19:27:26 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 19:27:26 localhost systemd[1]: Finished dracut cmdline hook.
Nov 25 19:27:26 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 25 19:27:26 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 19:27:26 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 25 19:27:26 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 19:27:27 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 25 19:27:27 localhost kernel: RPC: Registered udp transport module.
Nov 25 19:27:27 localhost kernel: RPC: Registered tcp transport module.
Nov 25 19:27:27 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 19:27:27 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 19:27:27 localhost rpc.statd[444]: Version 2.5.4 starting
Nov 25 19:27:27 localhost rpc.statd[444]: Initializing NSM state
Nov 25 19:27:27 localhost rpc.idmapd[449]: Setting log level to 0
Nov 25 19:27:27 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 25 19:27:27 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 19:27:27 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 19:27:27 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 19:27:27 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 25 19:27:27 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 25 19:27:27 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 19:27:27 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 25 19:27:27 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 19:27:27 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 19:27:27 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 19:27:27 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 19:27:27 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 25 19:27:27 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 19:27:27 localhost systemd[1]: Reached target Network.
Nov 25 19:27:27 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 19:27:27 localhost systemd[1]: Starting dracut initqueue hook...
Nov 25 19:27:27 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 25 19:27:27 localhost systemd[1]: Reached target System Initialization.
Nov 25 19:27:27 localhost systemd[1]: Reached target Basic System.
Nov 25 19:27:27 localhost kernel: libata version 3.00 loaded.
Nov 25 19:27:27 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 25 19:27:27 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 25 19:27:27 localhost kernel: scsi host0: ata_piix
Nov 25 19:27:27 localhost kernel: scsi host1: ata_piix
Nov 25 19:27:27 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 25 19:27:27 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 25 19:27:27 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 19:27:27 localhost kernel:  vda: vda1
Nov 25 19:27:27 localhost kernel: ata1: found unknown device (class 0)
Nov 25 19:27:27 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 19:27:27 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 19:27:27 localhost systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 19:27:27 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 19:27:27 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 19:27:27 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 19:27:27 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 19:27:27 localhost systemd[1]: Reached target Initrd Root Device.
Nov 25 19:27:27 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 25 19:27:28 localhost systemd[1]: Finished dracut initqueue hook.
Nov 25 19:27:28 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 19:27:28 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 19:27:28 localhost systemd[1]: Reached target Remote File Systems.
Nov 25 19:27:28 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 25 19:27:28 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 25 19:27:28 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 25 19:27:28 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 19:27:28 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 19:27:28 localhost systemd[1]: Mounting /sysroot...
Nov 25 19:27:29 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 19:27:29 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 25 19:27:29 localhost kernel: XFS (vda1): Ending clean mount
Nov 25 19:27:29 localhost systemd[1]: Mounted /sysroot.
Nov 25 19:27:29 localhost systemd[1]: Reached target Initrd Root File System.
Nov 25 19:27:29 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 19:27:29 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 19:27:29 localhost systemd[1]: Reached target Initrd File Systems.
Nov 25 19:27:29 localhost systemd[1]: Reached target Initrd Default Target.
Nov 25 19:27:29 localhost systemd[1]: Starting dracut mount hook...
Nov 25 19:27:29 localhost systemd[1]: Finished dracut mount hook.
Nov 25 19:27:29 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 19:27:29 localhost rpc.idmapd[449]: exiting on signal 15
Nov 25 19:27:29 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 19:27:29 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 19:27:29 localhost systemd[1]: Stopped target Network.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Timer Units.
Nov 25 19:27:29 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 19:27:29 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Basic System.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Path Units.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Remote File Systems.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Slice Units.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Socket Units.
Nov 25 19:27:29 localhost systemd[1]: Stopped target System Initialization.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Local File Systems.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Swaps.
Nov 25 19:27:29 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped dracut mount hook.
Nov 25 19:27:29 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 25 19:27:29 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 19:27:29 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 19:27:29 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 25 19:27:29 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 25 19:27:29 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 19:27:29 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 19:27:29 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 19:27:29 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 19:27:29 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 25 19:27:29 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 19:27:29 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 19:27:29 localhost systemd[1]: systemd-udevd.service: Consumed 1.093s CPU time.
Nov 25 19:27:29 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Closed udev Control Socket.
Nov 25 19:27:29 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Closed udev Kernel Socket.
Nov 25 19:27:29 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 25 19:27:29 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 25 19:27:29 localhost systemd[1]: Starting Cleanup udev Database...
Nov 25 19:27:29 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 19:27:29 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 19:27:29 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Stopped Create System Users.
Nov 25 19:27:29 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 19:27:29 localhost systemd[1]: Finished Cleanup udev Database.
Nov 25 19:27:29 localhost systemd[1]: Reached target Switch Root.
Nov 25 19:27:29 localhost systemd[1]: Starting Switch Root...
Nov 25 19:27:29 localhost systemd[1]: Switching root.
Nov 25 19:27:29 localhost systemd-journald[306]: Journal stopped
Nov 25 19:27:30 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Nov 25 19:27:30 localhost kernel: audit: type=1404 audit(1764098849.834:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 19:27:30 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:27:30 localhost kernel: SELinux:  policy capability open_perms=1
Nov 25 19:27:30 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:27:30 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:27:30 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:27:30 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:27:30 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:27:30 localhost kernel: audit: type=1403 audit(1764098849.989:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 25 19:27:30 localhost systemd[1]: Successfully loaded SELinux policy in 160.391ms.
Nov 25 19:27:30 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.168ms.
Nov 25 19:27:30 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 19:27:30 localhost systemd[1]: Detected virtualization kvm.
Nov 25 19:27:30 localhost systemd[1]: Detected architecture x86-64.
Nov 25 19:27:30 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:27:30 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 25 19:27:30 localhost systemd[1]: Stopped Switch Root.
Nov 25 19:27:30 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 19:27:30 localhost systemd[1]: Created slice Slice /system/getty.
Nov 25 19:27:30 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 25 19:27:30 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 25 19:27:30 localhost systemd[1]: Created slice User and Session Slice.
Nov 25 19:27:30 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 19:27:30 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 25 19:27:30 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 19:27:30 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 19:27:30 localhost systemd[1]: Stopped target Switch Root.
Nov 25 19:27:30 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 25 19:27:30 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 25 19:27:30 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 25 19:27:30 localhost systemd[1]: Reached target Path Units.
Nov 25 19:27:30 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 25 19:27:30 localhost systemd[1]: Reached target Slice Units.
Nov 25 19:27:30 localhost systemd[1]: Reached target Swaps.
Nov 25 19:27:30 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 25 19:27:30 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 25 19:27:30 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 25 19:27:30 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 25 19:27:30 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 25 19:27:30 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 19:27:30 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 19:27:30 localhost systemd[1]: Mounting Huge Pages File System...
Nov 25 19:27:30 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 25 19:27:30 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 25 19:27:30 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 25 19:27:30 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 19:27:30 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 19:27:30 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 19:27:30 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 25 19:27:30 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 25 19:27:30 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 25 19:27:30 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 19:27:30 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 25 19:27:30 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 25 19:27:30 localhost systemd[1]: Stopped Journal Service.
Nov 25 19:27:30 localhost systemd[1]: Starting Journal Service...
Nov 25 19:27:30 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 19:27:30 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 25 19:27:30 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 19:27:30 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 25 19:27:30 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 19:27:30 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 19:27:30 localhost systemd-journald[678]: Journal started
Nov 25 19:27:30 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 19:27:30 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 25 19:27:30 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 19:27:30 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 19:27:30 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 25 19:27:30 localhost systemd[1]: Started Journal Service.
Nov 25 19:27:30 localhost systemd[1]: Mounted Huge Pages File System.
Nov 25 19:27:30 localhost kernel: ACPI: bus type drm_connector registered
Nov 25 19:27:30 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 25 19:27:30 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 25 19:27:30 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 25 19:27:30 localhost kernel: fuse: init (API version 7.37)
Nov 25 19:27:30 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 19:27:30 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 19:27:30 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 19:27:30 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 19:27:30 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 25 19:27:30 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 19:27:30 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 19:27:30 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 19:27:30 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 25 19:27:30 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 19:27:30 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 19:27:30 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 19:27:30 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 19:27:30 localhost systemd[1]: Mounting FUSE Control File System...
Nov 25 19:27:30 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 19:27:30 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 25 19:27:30 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 19:27:30 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 19:27:30 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 19:27:30 localhost systemd[1]: Starting Create System Users...
Nov 25 19:27:30 localhost systemd[1]: Mounted FUSE Control File System.
Nov 25 19:27:30 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 19:27:30 localhost systemd-journald[678]: Received client request to flush runtime journal.
Nov 25 19:27:30 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 19:27:30 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 19:27:30 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 19:27:30 localhost systemd[1]: Finished Create System Users.
Nov 25 19:27:30 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 19:27:30 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 19:27:30 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 19:27:30 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 19:27:30 localhost systemd[1]: Reached target Local File Systems.
Nov 25 19:27:30 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 19:27:30 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 19:27:30 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 19:27:30 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 19:27:30 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 19:27:30 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 19:27:30 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 19:27:30 localhost bootctl[696]: Couldn't find EFI system partition, skipping.
Nov 25 19:27:30 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 25 19:27:31 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 19:27:31 localhost systemd[1]: Starting Security Auditing Service...
Nov 25 19:27:31 localhost systemd[1]: Starting RPC Bind...
Nov 25 19:27:31 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 19:27:31 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 19:27:31 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 19:27:31 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 19:27:31 localhost systemd[1]: Started RPC Bind.
Nov 25 19:27:31 localhost augenrules[708]: /sbin/augenrules: No change
Nov 25 19:27:31 localhost augenrules[723]: No rules
Nov 25 19:27:31 localhost augenrules[723]: enabled 1
Nov 25 19:27:31 localhost augenrules[723]: failure 1
Nov 25 19:27:31 localhost augenrules[723]: pid 703
Nov 25 19:27:31 localhost augenrules[723]: rate_limit 0
Nov 25 19:27:31 localhost augenrules[723]: backlog_limit 8192
Nov 25 19:27:31 localhost augenrules[723]: lost 0
Nov 25 19:27:31 localhost augenrules[723]: backlog 3
Nov 25 19:27:31 localhost augenrules[723]: backlog_wait_time 60000
Nov 25 19:27:31 localhost augenrules[723]: backlog_wait_time_actual 0
Nov 25 19:27:31 localhost augenrules[723]: enabled 1
Nov 25 19:27:31 localhost augenrules[723]: failure 1
Nov 25 19:27:31 localhost augenrules[723]: pid 703
Nov 25 19:27:31 localhost augenrules[723]: rate_limit 0
Nov 25 19:27:31 localhost augenrules[723]: backlog_limit 8192
Nov 25 19:27:31 localhost augenrules[723]: lost 0
Nov 25 19:27:31 localhost augenrules[723]: backlog 0
Nov 25 19:27:31 localhost augenrules[723]: backlog_wait_time 60000
Nov 25 19:27:31 localhost augenrules[723]: backlog_wait_time_actual 0
Nov 25 19:27:31 localhost augenrules[723]: enabled 1
Nov 25 19:27:31 localhost augenrules[723]: failure 1
Nov 25 19:27:31 localhost augenrules[723]: pid 703
Nov 25 19:27:31 localhost augenrules[723]: rate_limit 0
Nov 25 19:27:31 localhost augenrules[723]: backlog_limit 8192
Nov 25 19:27:31 localhost augenrules[723]: lost 0
Nov 25 19:27:31 localhost augenrules[723]: backlog 0
Nov 25 19:27:31 localhost augenrules[723]: backlog_wait_time 60000
Nov 25 19:27:31 localhost augenrules[723]: backlog_wait_time_actual 0
Nov 25 19:27:31 localhost systemd[1]: Started Security Auditing Service.
Nov 25 19:27:31 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 19:27:31 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 19:27:31 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 25 19:27:31 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 19:27:31 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 19:27:31 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 19:27:31 localhost systemd[1]: Starting Update is Completed...
Nov 25 19:27:31 localhost systemd[1]: Finished Update is Completed.
Nov 25 19:27:31 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 19:27:31 localhost systemd[1]: Reached target System Initialization.
Nov 25 19:27:31 localhost systemd[1]: Started dnf makecache --timer.
Nov 25 19:27:31 localhost systemd[1]: Started Daily rotation of log files.
Nov 25 19:27:31 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 19:27:31 localhost systemd[1]: Reached target Timer Units.
Nov 25 19:27:31 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 19:27:31 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 19:27:31 localhost systemd[1]: Reached target Socket Units.
Nov 25 19:27:31 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 25 19:27:31 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 19:27:31 localhost systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 19:27:31 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 19:27:31 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 19:27:31 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 19:27:31 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 19:27:31 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 25 19:27:31 localhost dbus-broker-lau[768]: Ready
Nov 25 19:27:31 localhost systemd[1]: Reached target Basic System.
Nov 25 19:27:31 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 19:27:31 localhost systemd[1]: Starting NTP client/server...
Nov 25 19:27:31 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 19:27:31 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 19:27:31 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 19:27:31 localhost systemd[1]: Started irqbalance daemon.
Nov 25 19:27:31 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 19:27:31 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 19:27:31 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 19:27:31 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 19:27:31 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 25 19:27:31 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 25 19:27:31 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 19:27:31 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 19:27:31 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 19:27:31 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 25 19:27:31 localhost systemd[1]: Starting User Login Management...
Nov 25 19:27:31 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 19:27:31 localhost chronyd[799]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 19:27:31 localhost chronyd[799]: Loaded 0 symmetric keys
Nov 25 19:27:31 localhost chronyd[799]: Using right/UTC timezone to obtain leap second data
Nov 25 19:27:31 localhost chronyd[799]: Loaded seccomp filter (level 2)
Nov 25 19:27:31 localhost systemd[1]: Started NTP client/server.
Nov 25 19:27:31 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 19:27:31 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 19:27:31 localhost systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 19:27:31 localhost systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 19:27:31 localhost systemd-logind[789]: New seat seat0.
Nov 25 19:27:31 localhost systemd[1]: Started User Login Management.
Nov 25 19:27:31 localhost kernel: kvm_amd: TSC scaling supported
Nov 25 19:27:31 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 25 19:27:31 localhost kernel: kvm_amd: Nested Paging enabled
Nov 25 19:27:31 localhost kernel: kvm_amd: LBR virtualization supported
Nov 25 19:27:31 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 25 19:27:31 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 25 19:27:31 localhost kernel: Console: switching to colour dummy device 80x25
Nov 25 19:27:31 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 19:27:31 localhost kernel: [drm] features: -context_init
Nov 25 19:27:31 localhost kernel: [drm] number of scanouts: 1
Nov 25 19:27:31 localhost kernel: [drm] number of cap sets: 0
Nov 25 19:27:31 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 25 19:27:31 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 19:27:31 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 25 19:27:31 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 19:27:31 localhost iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Nov 25 19:27:31 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 25 19:27:32 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 25 Nov 2025 19:27:32 +0000. Up 8.07 seconds.
Nov 25 19:27:32 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 25 19:27:32 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 25 19:27:32 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpo1o0mpph.mount: Deactivated successfully.
Nov 25 19:27:32 localhost systemd[1]: Starting Hostname Service...
Nov 25 19:27:32 localhost systemd[1]: Started Hostname Service.
Nov 25 19:27:32 np0005535736.novalocal systemd-hostnamed[854]: Hostname set to <np0005535736.novalocal> (static)
Nov 25 19:27:32 np0005535736.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 19:27:32 np0005535736.novalocal systemd[1]: Reached target Preparation for Network.
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Starting Network Manager...
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.0935] NetworkManager (version 1.54.1-1.el9) is starting... (boot:d0551c86-76fe-4da9-b9a1-a5fabb73b624)
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.0941] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1177] manager[0x55ffcc07f080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1242] hostname: hostname: using hostnamed
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1243] hostname: static hostname changed from (none) to "np0005535736.novalocal"
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1247] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1358] manager[0x55ffcc07f080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1360] manager[0x55ffcc07f080]: rfkill: WWAN hardware radio set enabled
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1481] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1481] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1482] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1483] manager: Networking is enabled by state file
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1485] settings: Loaded settings plugin: keyfile (internal)
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1549] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1584] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1657] dhcp: init: Using DHCP client 'internal'
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1663] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1702] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1721] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1731] device (lo): Activation: starting connection 'lo' (907c96cc-9d5c-4708-9196-ba7e632419fa)
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1744] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1750] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Started Network Manager.
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1806] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1812] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1817] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1820] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1824] device (eth0): carrier: link connected
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Reached target Network.
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1851] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1859] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1868] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1873] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1873] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1877] manager: NetworkManager state is now CONNECTING
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1879] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1886] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1890] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1931] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1934] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1940] device (lo): Activation: successful, device activated.
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1948] dhcp4 (eth0): state changed new lease, address=38.102.83.113
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1956] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1976] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1996] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.1997] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.2001] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.2003] device (eth0): Activation: successful, device activated.
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.2010] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 25 19:27:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764098853.2014] manager: startup complete
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Reached target NFS client services.
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: Reached target Remote File Systems.
Nov 25 19:27:33 np0005535736.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 25 Nov 2025 19:27:33 +0000. Up 9.23 seconds.
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.113         | 255.255.255.0 | global | fa:16:3e:03:6a:19 |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe03:6a19/64 |       .       |  link  | fa:16:3e:03:6a:19 |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 25 19:27:33 np0005535736.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 19:27:34 np0005535736.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Nov 25 19:27:34 np0005535736.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 25 19:27:34 np0005535736.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Nov 25 19:27:34 np0005535736.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Nov 25 19:27:34 np0005535736.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Nov 25 19:27:34 np0005535736.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: Generating public/private rsa key pair.
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: The key fingerprint is:
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: SHA256:6lB9zYPJWdQZuURSsxc/8hP44Yl2N4dRemzmGUtvBtg root@np0005535736.novalocal
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: The key's randomart image is:
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: +---[RSA 3072]----+
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |            o+==.|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |           . +**o|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |            o+EBB|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |       . . B  OXX|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |      . S * +o.@B|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |     . . .  ...o=|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |    . .          |
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |     o           |
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |      .          |
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: +----[SHA256]-----+
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: The key fingerprint is:
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: SHA256:XnYV0CVsNXvKRtFTZ/MDNj/tu+gDozue1eLUm/M9Zlk root@np0005535736.novalocal
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: The key's randomart image is:
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: +---[ECDSA 256]---+
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |            .B+*B|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |            . B*O|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |             .o==|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |             + o+|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |        S o . + .|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |       . o +o.  E|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |        . .+oo .o|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |         o= .o+=o|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |        .+o..=*oo|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: +----[SHA256]-----+
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: The key fingerprint is:
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: SHA256:380bXHJZH+n4OATafcx27rwSflq9Guhu7lWgCGtnLYA root@np0005535736.novalocal
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: The key's randomart image is:
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: +--[ED25519 256]--+
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |                 |
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |      .         .|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |     E o   . . o.|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |        + = + * =|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |       oS* + +.O=|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |      . o...o=*=o|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |          ..o=*oo|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |          ...o+*.|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: |          == o*o+|
Nov 25 19:27:34 np0005535736.novalocal cloud-init[922]: +----[SHA256]-----+
Nov 25 19:27:34 np0005535736.novalocal sm-notify[1005]: Version 2.5.4 starting
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Reached target Network is Online.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Starting System Logging Service...
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Starting Permit User Sessions...
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Finished Permit User Sessions.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Started Command Scheduler.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Started Getty on tty1.
Nov 25 19:27:34 np0005535736.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Nov 25 19:27:34 np0005535736.novalocal sshd[1007]: Server listening on :: port 22.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Reached target Login Prompts.
Nov 25 19:27:34 np0005535736.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Nov 25 19:27:34 np0005535736.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 25 19:27:34 np0005535736.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 19% if used.)
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 25 19:27:34 np0005535736.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Nov 25 19:27:34 np0005535736.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Nov 25 19:27:34 np0005535736.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Started System Logging Service.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Reached target Multi-User System.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 19:27:34 np0005535736.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 19:27:35 np0005535736.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 19:27:35 np0005535736.novalocal kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Nov 25 19:27:35 np0005535736.novalocal kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1114]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 25 Nov 2025 19:27:35 +0000. Up 10.91 seconds.
Nov 25 19:27:35 np0005535736.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 19:27:35 np0005535736.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 19:27:35 np0005535736.novalocal dracut[1268]: dracut-057-102.git20250818.el9
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1271]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 25 Nov 2025 19:27:35 +0000. Up 11.34 seconds.
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1286]: #############################################################
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1287]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1289]: 256 SHA256:XnYV0CVsNXvKRtFTZ/MDNj/tu+gDozue1eLUm/M9Zlk root@np0005535736.novalocal (ECDSA)
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1291]: 256 SHA256:380bXHJZH+n4OATafcx27rwSflq9Guhu7lWgCGtnLYA root@np0005535736.novalocal (ED25519)
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1293]: 3072 SHA256:6lB9zYPJWdQZuURSsxc/8hP44Yl2N4dRemzmGUtvBtg root@np0005535736.novalocal (RSA)
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1294]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1295]: #############################################################
Nov 25 19:27:35 np0005535736.novalocal cloud-init[1271]: Cloud-init v. 24.4-7.el9 finished at Tue, 25 Nov 2025 19:27:35 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.56 seconds
Nov 25 19:27:35 np0005535736.novalocal dracut[1270]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 25 19:27:35 np0005535736.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 19:27:35 np0005535736.novalocal systemd[1]: Reached target Cloud-init target.
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 19:27:36 np0005535736.novalocal dracut[1270]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 19:27:37 np0005535736.novalocal sshd-session[1576]: Unable to negotiate with 38.102.83.114 port 48152: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 25 19:27:37 np0005535736.novalocal sshd-session[1586]: Connection closed by 38.102.83.114 port 48166 [preauth]
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 19:27:37 np0005535736.novalocal sshd-session[1595]: Unable to negotiate with 38.102.83.114 port 48172: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 25 19:27:37 np0005535736.novalocal sshd-session[1611]: Unable to negotiate with 38.102.83.114 port 48184: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 25 19:27:37 np0005535736.novalocal sshd-session[1553]: Connection closed by 38.102.83.114 port 57272 [preauth]
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 25 19:27:37 np0005535736.novalocal sshd-session[1619]: Connection closed by 38.102.83.114 port 48190 [preauth]
Nov 25 19:27:37 np0005535736.novalocal sshd-session[1717]: Unable to negotiate with 38.102.83.114 port 48206: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 25 19:27:37 np0005535736.novalocal sshd-session[1726]: Unable to negotiate with 38.102.83.114 port 48212: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: memstrack is not available
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 19:27:37 np0005535736.novalocal sshd-session[1629]: Connection closed by 38.102.83.114 port 48196 [preauth]
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: memstrack is not available
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 19:27:37 np0005535736.novalocal dracut[1270]: *** Including module: systemd ***
Nov 25 19:27:38 np0005535736.novalocal dracut[1270]: *** Including module: fips ***
Nov 25 19:27:38 np0005535736.novalocal chronyd[799]: Selected source 142.4.192.253 (2.centos.pool.ntp.org)
Nov 25 19:27:38 np0005535736.novalocal chronyd[799]: System clock TAI offset set to 37 seconds
Nov 25 19:27:38 np0005535736.novalocal dracut[1270]: *** Including module: systemd-initrd ***
Nov 25 19:27:38 np0005535736.novalocal dracut[1270]: *** Including module: i18n ***
Nov 25 19:27:38 np0005535736.novalocal dracut[1270]: *** Including module: drm ***
Nov 25 19:27:39 np0005535736.novalocal dracut[1270]: *** Including module: prefixdevname ***
Nov 25 19:27:39 np0005535736.novalocal dracut[1270]: *** Including module: kernel-modules ***
Nov 25 19:27:39 np0005535736.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 25 19:27:39 np0005535736.novalocal dracut[1270]: *** Including module: kernel-modules-extra ***
Nov 25 19:27:39 np0005535736.novalocal dracut[1270]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 25 19:27:39 np0005535736.novalocal dracut[1270]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 25 19:27:39 np0005535736.novalocal dracut[1270]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 25 19:27:39 np0005535736.novalocal dracut[1270]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 25 19:27:39 np0005535736.novalocal dracut[1270]: *** Including module: qemu ***
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: *** Including module: fstab-sys ***
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: *** Including module: rootfs-block ***
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: *** Including module: terminfo ***
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: *** Including module: udev-rules ***
Nov 25 19:27:40 np0005535736.novalocal chronyd[799]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: Skipping udev rule: 91-permissions.rules
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: *** Including module: virtiofs ***
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: *** Including module: dracut-systemd ***
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: *** Including module: usrmount ***
Nov 25 19:27:40 np0005535736.novalocal dracut[1270]: *** Including module: base ***
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]: *** Including module: fs-lib ***
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]: *** Including module: kdumpbase ***
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:   microcode_ctl module: mangling fw_dir
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel" is ignored
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 19:27:41 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 19:27:42 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 19:27:42 np0005535736.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 19:27:42 np0005535736.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 19:27:42 np0005535736.novalocal dracut[1270]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 25 19:27:42 np0005535736.novalocal dracut[1270]: *** Including module: openssl ***
Nov 25 19:27:42 np0005535736.novalocal dracut[1270]: *** Including module: shutdown ***
Nov 25 19:27:42 np0005535736.novalocal dracut[1270]: *** Including module: squash ***
Nov 25 19:27:42 np0005535736.novalocal dracut[1270]: *** Including modules done ***
Nov 25 19:27:42 np0005535736.novalocal dracut[1270]: *** Installing kernel module dependencies ***
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: IRQ 25 affinity is now unmanaged
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: IRQ 31 affinity is now unmanaged
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: IRQ 28 affinity is now unmanaged
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: IRQ 32 affinity is now unmanaged
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: IRQ 30 affinity is now unmanaged
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 25 19:27:42 np0005535736.novalocal irqbalance[781]: IRQ 29 affinity is now unmanaged
Nov 25 19:27:43 np0005535736.novalocal dracut[1270]: *** Installing kernel module dependencies done ***
Nov 25 19:27:43 np0005535736.novalocal dracut[1270]: *** Resolving executable dependencies ***
Nov 25 19:27:43 np0005535736.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:27:45 np0005535736.novalocal dracut[1270]: *** Resolving executable dependencies done ***
Nov 25 19:27:45 np0005535736.novalocal dracut[1270]: *** Generating early-microcode cpio image ***
Nov 25 19:27:45 np0005535736.novalocal dracut[1270]: *** Store current command line parameters ***
Nov 25 19:27:45 np0005535736.novalocal dracut[1270]: Stored kernel commandline:
Nov 25 19:27:45 np0005535736.novalocal dracut[1270]: No dracut internal kernel commandline stored in the initramfs
Nov 25 19:28:03 np0005535736.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 19:28:52 np0005535736.novalocal dracut[1270]: *** Install squash loader ***
Nov 25 19:28:53 np0005535736.novalocal dracut[1270]: *** Squashing the files inside the initramfs ***
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: *** Squashing the files inside the initramfs done ***
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: *** Hardlinking files ***
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: Mode:           real
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: Files:          50
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: Linked:         0 files
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: Compared:       0 xattrs
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: Compared:       0 files
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: Saved:          0 B
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: Duration:       0.000488 seconds
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: *** Hardlinking files done ***
Nov 25 19:28:54 np0005535736.novalocal dracut[1270]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 25 19:28:55 np0005535736.novalocal kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Nov 25 19:28:55 np0005535736.novalocal kdumpctl[1018]: kdump: Starting kdump: [OK]
Nov 25 19:28:55 np0005535736.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 25 19:28:55 np0005535736.novalocal systemd[1]: Startup finished in 1.800s (kernel) + 3.737s (initrd) + 1min 25.542s (userspace) = 1min 31.080s.
Nov 25 19:29:45 np0005535736.novalocal sshd-session[4299]: Accepted publickey for zuul from 38.102.83.114 port 57998 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 25 19:29:45 np0005535736.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 25 19:29:45 np0005535736.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 19:29:45 np0005535736.novalocal systemd-logind[789]: New session 1 of user zuul.
Nov 25 19:29:45 np0005535736.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 19:29:45 np0005535736.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 25 19:29:45 np0005535736.novalocal systemd[4303]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Queued start job for default target Main User Target.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Created slice User Application Slice.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Reached target Paths.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Reached target Timers.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Starting D-Bus User Message Bus Socket...
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Starting Create User's Volatile Files and Directories...
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Finished Create User's Volatile Files and Directories.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Listening on D-Bus User Message Bus Socket.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Reached target Sockets.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Reached target Basic System.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Reached target Main User Target.
Nov 25 19:29:46 np0005535736.novalocal systemd[4303]: Startup finished in 126ms.
Nov 25 19:29:46 np0005535736.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 25 19:29:46 np0005535736.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 25 19:29:46 np0005535736.novalocal sshd-session[4299]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 19:29:46 np0005535736.novalocal python3[4385]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:29:49 np0005535736.novalocal python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:29:55 np0005535736.novalocal python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:29:56 np0005535736.novalocal python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 19:29:58 np0005535736.novalocal python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVv+GaypK306cULNrKWp05OCJFX4YgyA/2HEbscVAUtHFLoXl2AhsRw13dfLq1snRPuq+u5CAcLQxKPlb2t+4LhSKZET0dTStrp48iq019Am/AOKtOcaQVCQMapId17Xf3+g5Ck5rWUg9fnIEdBQveVZLoOjDiTLXzdNPbT/IJUQUDTvuw+L/I0PuhuSWcF1KrS/o4xan+Mm/xlVHvOFRBOI9ONganwJW5dTQSFNhUklIqGbIdhUnJKkoUKp5bNhjXSPANhFIgq6xKPRYMmoWXpjVSFemDkX2PayyTyF86azbbjfm0aM1z0cF5bUe4ErcL/CMJUda/69Lyn5i/a+qlr0RmSnSHiJVVQwbCGnovisqEE1JmQwf4a4RBV4PlIps1XYkUKq4tt7wnK32ZAIFZUqXI2fIhWWGXfoYbh+aVq9p8GwEzR6dmcwRu7t1OiLZjYezplU0t5AVoRVtd7sakB5be0l7RopiJQyKiYca6ZEsBfs6gMYd1uMdab0ChGDE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:29:58 np0005535736.novalocal python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:29:59 np0005535736.novalocal python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:29:59 np0005535736.novalocal python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764098998.6685307-207-149784618277931/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=189f01ba10db43d6ad44aac645f126b5_id_rsa follow=False checksum=594d911ba8577f734410c4fbad07d48b219d1ea3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:29:59 np0005535736.novalocal python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:30:00 np0005535736.novalocal python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764098999.6250753-240-69930399527015/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=189f01ba10db43d6ad44aac645f126b5_id_rsa.pub follow=False checksum=abf64fa02f7e97eb13e851bfbcaa1232740ecd25 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:01 np0005535736.novalocal python3[4973]: ansible-ping Invoked with data=pong
Nov 25 19:30:02 np0005535736.novalocal python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:30:04 np0005535736.novalocal python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 19:30:05 np0005535736.novalocal python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:05 np0005535736.novalocal python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:05 np0005535736.novalocal python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:06 np0005535736.novalocal python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:06 np0005535736.novalocal python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:06 np0005535736.novalocal python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:08 np0005535736.novalocal sudo[5231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avgaicljdquusmuxyqkorpphgnkronwl ; /usr/bin/python3'
Nov 25 19:30:08 np0005535736.novalocal sudo[5231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:08 np0005535736.novalocal python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:08 np0005535736.novalocal sudo[5231]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:08 np0005535736.novalocal sudo[5309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvwslcwkzsbujrecfschmrzjupnjeiwp ; /usr/bin/python3'
Nov 25 19:30:08 np0005535736.novalocal sudo[5309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:09 np0005535736.novalocal python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:30:09 np0005535736.novalocal sudo[5309]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:09 np0005535736.novalocal sudo[5382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kedqixxscyttnoaxbqqvjanrtwevaccp ; /usr/bin/python3'
Nov 25 19:30:09 np0005535736.novalocal sudo[5382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:09 np0005535736.novalocal python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764099008.655243-21-78691470796188/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:09 np0005535736.novalocal sudo[5382]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:10 np0005535736.novalocal python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:10 np0005535736.novalocal python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:10 np0005535736.novalocal python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:11 np0005535736.novalocal python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:11 np0005535736.novalocal python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:11 np0005535736.novalocal python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:12 np0005535736.novalocal python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:12 np0005535736.novalocal python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:12 np0005535736.novalocal python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:12 np0005535736.novalocal python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:13 np0005535736.novalocal python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:13 np0005535736.novalocal python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:13 np0005535736.novalocal python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:14 np0005535736.novalocal python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:14 np0005535736.novalocal python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:14 np0005535736.novalocal python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:15 np0005535736.novalocal python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:15 np0005535736.novalocal python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:15 np0005535736.novalocal python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:16 np0005535736.novalocal python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:16 np0005535736.novalocal python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:16 np0005535736.novalocal python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:16 np0005535736.novalocal python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:17 np0005535736.novalocal python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:17 np0005535736.novalocal python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:17 np0005535736.novalocal python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:20 np0005535736.novalocal sudo[6056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhswxdkbhpxyhpygscevlazwfrzvltuf ; /usr/bin/python3'
Nov 25 19:30:20 np0005535736.novalocal sudo[6056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:20 np0005535736.novalocal python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 19:30:20 np0005535736.novalocal systemd[1]: Starting Time & Date Service...
Nov 25 19:30:20 np0005535736.novalocal systemd[1]: Started Time & Date Service.
Nov 25 19:30:20 np0005535736.novalocal systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Nov 25 19:30:20 np0005535736.novalocal sudo[6056]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:20 np0005535736.novalocal sudo[6087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acifftdiapmzparusutengusflucvxxr ; /usr/bin/python3'
Nov 25 19:30:20 np0005535736.novalocal sudo[6087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:21 np0005535736.novalocal python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:21 np0005535736.novalocal sudo[6087]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:21 np0005535736.novalocal python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:30:21 np0005535736.novalocal python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764099021.22898-153-133129971044358/source _original_basename=tmp3j04veuw follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:22 np0005535736.novalocal python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:30:22 np0005535736.novalocal python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764099022.1431794-183-252698493687716/source _original_basename=tmpou3n6fuu follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:23 np0005535736.novalocal sudo[6507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdrqnmqogpikuvlpaqkcslvzrjthpopd ; /usr/bin/python3'
Nov 25 19:30:23 np0005535736.novalocal sudo[6507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:23 np0005535736.novalocal python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:30:23 np0005535736.novalocal sudo[6507]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:23 np0005535736.novalocal sudo[6580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkvunsgyvkunxzgwixxyojjpgmkdpbrh ; /usr/bin/python3'
Nov 25 19:30:23 np0005535736.novalocal sudo[6580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:23 np0005535736.novalocal python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764099023.272505-231-181548444770406/source _original_basename=tmpwu7yfqtl follow=False checksum=0200c222fd008cff1969c6c814381aad26405e22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:23 np0005535736.novalocal sudo[6580]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:24 np0005535736.novalocal python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:30:24 np0005535736.novalocal python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:30:25 np0005535736.novalocal sudo[6734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvctttaarmbpilqubviwuwwkeeagtyxr ; /usr/bin/python3'
Nov 25 19:30:25 np0005535736.novalocal sudo[6734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:25 np0005535736.novalocal python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:30:25 np0005535736.novalocal sudo[6734]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:25 np0005535736.novalocal sudo[6807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdnxxyqdubqvivajnhoexbclopiwwrrw ; /usr/bin/python3'
Nov 25 19:30:25 np0005535736.novalocal sudo[6807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:25 np0005535736.novalocal python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764099025.057704-273-72973178991513/source _original_basename=tmpdbanljze follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:25 np0005535736.novalocal sudo[6807]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:26 np0005535736.novalocal sudo[6858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukrrybpylroaguolhlclfbdpkgaqiquh ; /usr/bin/python3'
Nov 25 19:30:26 np0005535736.novalocal sudo[6858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:26 np0005535736.novalocal python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-3e94-5f97-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:30:26 np0005535736.novalocal sudo[6858]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:27 np0005535736.novalocal python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e3b-3c83-3e94-5f97-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 19:30:28 np0005535736.novalocal python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:45 np0005535736.novalocal sudo[6940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqufzyzrmkpfqjinbcokaxslthcrxoci ; /usr/bin/python3'
Nov 25 19:30:45 np0005535736.novalocal sudo[6940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:30:45 np0005535736.novalocal python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:30:45 np0005535736.novalocal sudo[6940]: pam_unix(sudo:session): session closed for user root
Nov 25 19:30:50 np0005535736.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 19:31:21 np0005535736.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 19:31:21 np0005535736.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 25 19:31:21 np0005535736.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 19:31:21 np0005535736.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 19:31:21 np0005535736.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 25 19:31:21 np0005535736.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 25 19:31:21 np0005535736.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 25 19:31:21 np0005535736.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 25 19:31:21 np0005535736.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 25 19:31:21 np0005535736.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1099] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 19:31:21 np0005535736.novalocal systemd-udevd[6946]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1348] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1382] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1386] device (eth1): carrier: link connected
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1388] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1395] policy: auto-activating connection 'Wired connection 1' (9222631e-5368-3ea4-b024-56475051c0e7)
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1399] device (eth1): Activation: starting connection 'Wired connection 1' (9222631e-5368-3ea4-b024-56475051c0e7)
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1400] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1403] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1408] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:31:21 np0005535736.novalocal NetworkManager[858]: <info>  [1764099081.1413] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:31:22 np0005535736.novalocal python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-5c2d-4124-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:31:31 np0005535736.novalocal sshd-session[6976]: error: kex_exchange_identification: read: Connection reset by peer
Nov 25 19:31:31 np0005535736.novalocal sshd-session[6976]: Connection reset by 45.140.17.97 port 9786
Nov 25 19:31:32 np0005535736.novalocal sudo[7052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdswglqayqlgngracyhismyprubhfyme ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 19:31:32 np0005535736.novalocal sudo[7052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:31:32 np0005535736.novalocal python3[7054]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:31:32 np0005535736.novalocal sudo[7052]: pam_unix(sudo:session): session closed for user root
Nov 25 19:31:32 np0005535736.novalocal sudo[7125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssrelkdzncgdzsvkkxihahopgytcccou ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 19:31:32 np0005535736.novalocal sudo[7125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:31:32 np0005535736.novalocal python3[7127]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764099091.9466357-102-86725021188769/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=eef081f0e813fc7b4cce297bffc9a970a3cded20 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:31:32 np0005535736.novalocal sudo[7125]: pam_unix(sudo:session): session closed for user root
Nov 25 19:31:33 np0005535736.novalocal sudo[7175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttnaynogvhniewhvzpfndxmsnpqqzsko ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 19:31:33 np0005535736.novalocal sudo[7175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:31:33 np0005535736.novalocal python3[7177]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764099093.4873] caught SIGTERM, shutting down normally.
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Stopping Network Manager...
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764099093.4880] dhcp4 (eth0): canceled DHCP transaction
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764099093.4880] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764099093.4880] dhcp4 (eth0): state changed no lease
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764099093.4881] manager: NetworkManager state is now CONNECTING
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764099093.4938] dhcp4 (eth1): canceled DHCP transaction
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764099093.4938] dhcp4 (eth1): state changed no lease
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[858]: <info>  [1764099093.4974] exiting (success)
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Stopped Network Manager.
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: NetworkManager.service: Consumed 1.659s CPU time, 10.0M memory peak.
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Starting Network Manager...
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.5407] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:d0551c86-76fe-4da9-b9a1-a5fabb73b624)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.5408] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.5460] manager[0x55f852d7e070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Starting Hostname Service...
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Started Hostname Service.
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6604] hostname: hostname: using hostnamed
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6605] hostname: static hostname changed from (none) to "np0005535736.novalocal"
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6613] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6622] manager[0x55f852d7e070]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6622] manager[0x55f852d7e070]: rfkill: WWAN hardware radio set enabled
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6678] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6679] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6680] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6680] manager: Networking is enabled by state file
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6684] settings: Loaded settings plugin: keyfile (internal)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6691] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6739] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6754] dhcp: init: Using DHCP client 'internal'
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6759] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6769] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6778] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6793] device (lo): Activation: starting connection 'lo' (907c96cc-9d5c-4708-9196-ba7e632419fa)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6805] device (eth0): carrier: link connected
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6813] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6822] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6823] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6835] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6849] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6860] device (eth1): carrier: link connected
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6866] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6873] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (9222631e-5368-3ea4-b024-56475051c0e7) (indicated)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6874] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6884] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6895] device (eth1): Activation: starting connection 'Wired connection 1' (9222631e-5368-3ea4-b024-56475051c0e7)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6906] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Started Network Manager.
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6912] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6915] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6917] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6920] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6923] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6926] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6930] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6933] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6943] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6946] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6961] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6964] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6995] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.6997] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.7006] device (lo): Activation: successful, device activated.
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.7015] dhcp4 (eth0): state changed new lease, address=38.102.83.113
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.7027] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 19:31:33 np0005535736.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.7100] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.7120] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.7124] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.7131] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.7138] device (eth0): Activation: successful, device activated.
Nov 25 19:31:33 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099093.7146] manager: NetworkManager state is now CONNECTED_GLOBAL
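The lo/eth0 activations above walk NetworkManager's device state machine (disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated) until the manager reports CONNECTED_GLOBAL. A hedged sketch that waits for the same end state from a script, using the stock "nmcli networking connectivity check" subcommand (the 30-second budget is an assumption, not from the log):

    import subprocess, time

    def wait_for_full_connectivity(timeout=30.0, interval=1.0):
        """Poll NetworkManager until connectivity is 'full' (CONNECTED_GLOBAL)."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(
                ["nmcli", "networking", "connectivity", "check"],
                capture_output=True, text=True,
            ).stdout.strip()
            if out == "full":
                return True
            time.sleep(interval)
        return False

    if __name__ == "__main__":
        print("connected:", wait_for_full_connectivity())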
Nov 25 19:31:33 np0005535736.novalocal sudo[7175]: pam_unix(sudo:session): session closed for user root
Nov 25 19:31:34 np0005535736.novalocal python3[7261]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-5c2d-4124-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:31:43 np0005535736.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:31:46 np0005535736.novalocal systemd[4303]: Starting Mark boot as successful...
Nov 25 19:31:46 np0005535736.novalocal systemd[4303]: Finished Mark boot as successful.
Nov 25 19:32:03 np0005535736.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.2856] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 19:32:19 np0005535736.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:32:19 np0005535736.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3166] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3179] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3206] device (eth1): Activation: successful, device activated.
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3224] manager: startup complete
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3229] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <warn>  [1764099139.3252] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3267] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 19:32:19 np0005535736.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3399] dhcp4 (eth1): canceled DHCP transaction
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3400] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3400] dhcp4 (eth1): state changed no lease
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3424] policy: auto-activating connection 'ci-private-network' (29a28c5d-7338-527a-8ab3-91e82e4be558)
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3430] device (eth1): Activation: starting connection 'ci-private-network' (29a28c5d-7338-527a-8ab3-91e82e4be558)
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3432] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3436] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3451] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.3462] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.5027] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.5029] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:32:19 np0005535736.novalocal NetworkManager[7181]: <info>  [1764099139.5033] device (eth1): Activation: successful, device activated.
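eth1's assumed 'Wired connection 1' never obtained a DHCP lease, failed with ip-config-unavailable, and NetworkManager auto-activated the 'ci-private-network' profile in its place. A small sketch to confirm which profile each device actually ended up on, via nmcli's terse, scriptable output:

    import subprocess

    # -t = terse (colon-separated), -f = field list; both are standard nmcli flags.
    out = subprocess.run(
        ["nmcli", "-t", "-f", "NAME,DEVICE,STATE", "connection", "show", "--active"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        if not line:
            continue
        name, device, state = line.split(":", 2)
        print(f"{device}: {name} ({state})")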
Nov 25 19:32:29 np0005535736.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:32:34 np0005535736.novalocal sshd-session[4312]: Received disconnect from 38.102.83.114 port 57998:11: disconnected by user
Nov 25 19:32:34 np0005535736.novalocal sshd-session[4312]: Disconnected from user zuul 38.102.83.114 port 57998
Nov 25 19:32:34 np0005535736.novalocal sshd-session[4299]: pam_unix(sshd:session): session closed for user zuul
Nov 25 19:32:34 np0005535736.novalocal systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Nov 25 19:32:35 np0005535736.novalocal sshd-session[7290]: Accepted publickey for zuul from 38.102.83.114 port 47128 ssh2: RSA SHA256:A2IzWGkyPIJ9qDfl3onK8K/RA0W663rQ8oKe3YJ11n4
Nov 25 19:32:35 np0005535736.novalocal systemd-logind[789]: New session 3 of user zuul.
Nov 25 19:32:35 np0005535736.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 25 19:32:35 np0005535736.novalocal sshd-session[7290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 19:32:35 np0005535736.novalocal sudo[7369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agzofduydnjfywftlbohczwwbpbqhvca ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 19:32:35 np0005535736.novalocal sudo[7369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:32:35 np0005535736.novalocal python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:32:35 np0005535736.novalocal sudo[7369]: pam_unix(sudo:session): session closed for user root
Nov 25 19:32:35 np0005535736.novalocal sudo[7442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzpxyjmsdnzcgtwhikcjzqmhtwrabxsl ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 19:32:35 np0005535736.novalocal sudo[7442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:32:36 np0005535736.novalocal python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764099155.279889-267-43707609070440/source _original_basename=tmp3bg78i0_ follow=False checksum=33868afb5fb538162ca8d12ad86f46ed4e3544db backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:32:36 np0005535736.novalocal sudo[7442]: pam_unix(sudo:session): session closed for user root
Nov 25 19:32:38 np0005535736.novalocal sshd-session[7293]: Connection closed by 38.102.83.114 port 47128
Nov 25 19:32:38 np0005535736.novalocal sshd-session[7290]: pam_unix(sshd:session): session closed for user zuul
Nov 25 19:32:38 np0005535736.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 19:32:38 np0005535736.novalocal systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Nov 25 19:32:38 np0005535736.novalocal systemd-logind[789]: Removed session 3.
Nov 25 19:33:32 np0005535736.novalocal sshd-session[7471]: Connection closed by 205.210.31.148 port 56819
Nov 25 19:34:46 np0005535736.novalocal systemd[4303]: Created slice User Background Tasks Slice.
Nov 25 19:34:46 np0005535736.novalocal systemd[4303]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 19:34:46 np0005535736.novalocal systemd[4303]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 19:39:46 np0005535736.novalocal sshd-session[7477]: Accepted publickey for zuul from 38.102.83.114 port 44752 ssh2: RSA SHA256:A2IzWGkyPIJ9qDfl3onK8K/RA0W663rQ8oKe3YJ11n4
Nov 25 19:39:46 np0005535736.novalocal systemd-logind[789]: New session 4 of user zuul.
Nov 25 19:39:46 np0005535736.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 25 19:39:46 np0005535736.novalocal sshd-session[7477]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 19:39:46 np0005535736.novalocal sudo[7504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chxbkmhcslxvqziqbaopcpxlmmwcjkdf ; /usr/bin/python3'
Nov 25 19:39:46 np0005535736.novalocal sudo[7504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:46 np0005535736.novalocal python3[7506]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163e3b-3c83-f0e4-e323-000000001cd6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:39:46 np0005535736.novalocal sudo[7504]: pam_unix(sudo:session): session closed for user root
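The lsblk call above fetches /dev/vda's major:minor pair, which is the device key the io.max writes further down use (252:0). The same value can be derived without lsblk from the device node itself; a minimal sketch:

    import os

    st = os.stat("/dev/vda")
    major, minor = os.major(st.st_rdev), os.minor(st.st_rdev)
    print(f"{major}:{minor}")   # e.g. "252:0" for the first virtio disk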
Nov 25 19:39:46 np0005535736.novalocal sudo[7533]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqlhcrdkahdioyhbufnydetnputxpbfl ; /usr/bin/python3'
Nov 25 19:39:46 np0005535736.novalocal sudo[7533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:46 np0005535736.novalocal python3[7535]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:39:46 np0005535736.novalocal sudo[7533]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:47 np0005535736.novalocal sudo[7559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieyoigbbejpnxghuqwilpjxveqsavqmg ; /usr/bin/python3'
Nov 25 19:39:47 np0005535736.novalocal sudo[7559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:47 np0005535736.novalocal python3[7561]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:39:47 np0005535736.novalocal sudo[7559]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:47 np0005535736.novalocal sudo[7585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bliwzqdsxajecttgrprzuffmgduhlldv ; /usr/bin/python3'
Nov 25 19:39:47 np0005535736.novalocal sudo[7585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:47 np0005535736.novalocal python3[7587]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:39:47 np0005535736.novalocal sudo[7585]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:47 np0005535736.novalocal sudo[7611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syqybwdxdtuuqimznokdueabsddggmmf ; /usr/bin/python3'
Nov 25 19:39:47 np0005535736.novalocal sudo[7611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:47 np0005535736.novalocal python3[7613]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:39:47 np0005535736.novalocal sudo[7611]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:48 np0005535736.novalocal sudo[7637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztmgwqpkdxcarbmcesgxomwkmnnibcqe ; /usr/bin/python3'
Nov 25 19:39:48 np0005535736.novalocal sudo[7637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:48 np0005535736.novalocal python3[7639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:39:48 np0005535736.novalocal sudo[7637]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:48 np0005535736.novalocal sudo[7715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ardqzilcvvfwsucrtnwzcijvubqdqcof ; /usr/bin/python3'
Nov 25 19:39:48 np0005535736.novalocal sudo[7715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:48 np0005535736.novalocal python3[7717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:39:48 np0005535736.novalocal sudo[7715]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:49 np0005535736.novalocal sudo[7788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqgssqhivwzmxabrngqgtsplxdjtraky ; /usr/bin/python3'
Nov 25 19:39:49 np0005535736.novalocal sudo[7788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:49 np0005535736.novalocal python3[7790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764099588.5968027-478-140571899397571/source _original_basename=tmpmid6gkdm follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:39:49 np0005535736.novalocal sudo[7788]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:50 np0005535736.novalocal sudo[7838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dizfeajndposkkvvkpdprgbhswbsnsxp ; /usr/bin/python3'
Nov 25 19:39:50 np0005535736.novalocal sudo[7838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:50 np0005535736.novalocal python3[7840]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 19:39:50 np0005535736.novalocal systemd[1]: Reloading.
Nov 25 19:39:50 np0005535736.novalocal systemd-rc-local-generator[7861]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:39:50 np0005535736.novalocal sudo[7838]: pam_unix(sudo:session): session closed for user root
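The play drops an override into /etc/systemd/system.conf.d/ and then triggers a daemon reload (the systemd "Reloading." line above). The override's contents are not logged (content=NOT_LOGGING_PARAMETER); assuming it turns on IO accounting so the io.max files appear under each slice, a sketch could look like this (the DefaultIOAccounting setting is a guess, not taken from the log):

    import pathlib, subprocess

    conf_dir = pathlib.Path("/etc/systemd/system.conf.d")
    conf_dir.mkdir(mode=0o755, exist_ok=True)

    # Hypothetical override: the real file's contents were not logged.
    (conf_dir / "override.conf").write_text(
        "[Manager]\n"
        "DefaultIOAccounting=yes\n"
    )

    # Equivalent of ansible.builtin.systemd_service with daemon_reload=True.
    subprocess.run(["systemctl", "daemon-reload"], check=True)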
Nov 25 19:39:51 np0005535736.novalocal sudo[7893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwzxqzoedngiuvyetlsyhifgkqulvjpt ; /usr/bin/python3'
Nov 25 19:39:51 np0005535736.novalocal sudo[7893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:51 np0005535736.novalocal python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 19:39:52 np0005535736.novalocal sudo[7893]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:52 np0005535736.novalocal sudo[7919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvjqplveptlnxpqfkvyiojrwdbimiwhf ; /usr/bin/python3'
Nov 25 19:39:52 np0005535736.novalocal sudo[7919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:52 np0005535736.novalocal python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:39:52 np0005535736.novalocal sudo[7919]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:52 np0005535736.novalocal sudo[7947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzjxobpoimhvcauqwosmssefauergpfw ; /usr/bin/python3'
Nov 25 19:39:52 np0005535736.novalocal sudo[7947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:52 np0005535736.novalocal python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:39:52 np0005535736.novalocal sudo[7947]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:52 np0005535736.novalocal sudo[7975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohaevjljlvpxurwmraxdrkqktvydwmlj ; /usr/bin/python3'
Nov 25 19:39:52 np0005535736.novalocal sudo[7975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:53 np0005535736.novalocal python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:39:53 np0005535736.novalocal sudo[7975]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:53 np0005535736.novalocal sudo[8003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcexttxegywzvkpbppsususwdzvoswso ; /usr/bin/python3'
Nov 25 19:39:53 np0005535736.novalocal sudo[8003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:53 np0005535736.novalocal python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:39:53 np0005535736.novalocal sudo[8003]: pam_unix(sudo:session): session closed for user root
Nov 25 19:39:53 np0005535736.novalocal python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e3b-3c83-f0e4-e323-000000001cdd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
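Each of the four echo tasks above writes the same cgroup v2 io.max line for device 252:0, capping the slice at 18 000 read/write IOPS and 262144000 bytes/s (250 MiB/s) in each direction, and the final task reads everything back. A Python equivalent, assuming the io controller is already enabled on those cgroups:

    from pathlib import Path

    LIMIT = "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000"
    SLICES = ["init.scope", "machine.slice", "system.slice", "user.slice"]

    for name in SLICES:
        # The kernel parses the whole line and applies the limits atomically.
        (Path("/sys/fs/cgroup") / name / "io.max").write_text(LIMIT + "\n")

    # Read the limits back, mirroring the verification task in the log.
    for name in SLICES:
        print(name, (Path("/sys/fs/cgroup") / name / "io.max").read_text().strip())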
Nov 25 19:39:54 np0005535736.novalocal python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 19:39:56 np0005535736.novalocal sshd-session[7480]: Connection closed by 38.102.83.114 port 44752
Nov 25 19:39:56 np0005535736.novalocal sshd-session[7477]: pam_unix(sshd:session): session closed for user zuul
Nov 25 19:39:56 np0005535736.novalocal systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Nov 25 19:39:56 np0005535736.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 19:39:56 np0005535736.novalocal systemd[1]: session-4.scope: Consumed 4.594s CPU time.
Nov 25 19:39:56 np0005535736.novalocal systemd-logind[789]: Removed session 4.
Nov 25 19:39:58 np0005535736.novalocal sshd-session[8071]: Accepted publickey for zuul from 38.102.83.114 port 48202 ssh2: RSA SHA256:A2IzWGkyPIJ9qDfl3onK8K/RA0W663rQ8oKe3YJ11n4
Nov 25 19:39:58 np0005535736.novalocal systemd-logind[789]: New session 5 of user zuul.
Nov 25 19:39:58 np0005535736.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 25 19:39:58 np0005535736.novalocal sshd-session[8071]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 19:39:58 np0005535736.novalocal sudo[8098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eftvqgehrrieyjcgsavjasuvwyqeijkv ; /usr/bin/python3'
Nov 25 19:39:58 np0005535736.novalocal sudo[8098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:39:58 np0005535736.novalocal python3[8100]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 19:40:12 np0005535736.novalocal irqbalance[781]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 25 19:40:12 np0005535736.novalocal irqbalance[781]: IRQ 27 affinity is now unmanaged
Nov 25 19:40:13 np0005535736.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 25 19:40:13 np0005535736.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:40:13 np0005535736.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 19:40:13 np0005535736.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:40:13 np0005535736.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:40:13 np0005535736.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:40:13 np0005535736.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:40:13 np0005535736.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:40:22 np0005535736.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 25 19:40:22 np0005535736.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:40:22 np0005535736.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 19:40:22 np0005535736.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:40:22 np0005535736.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:40:22 np0005535736.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:40:22 np0005535736.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:40:22 np0005535736.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:40:30 np0005535736.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 25 19:40:30 np0005535736.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:40:30 np0005535736.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 19:40:30 np0005535736.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:40:30 np0005535736.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:40:30 np0005535736.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:40:30 np0005535736.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:40:30 np0005535736.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:40:32 np0005535736.novalocal setsebool[8168]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 19:40:32 np0005535736.novalocal setsebool[8168]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
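The podman/buildah install triggers a series of SELinux policy reloads and flips two virt booleans. The setsebool lines don't record whether -P (persistent across reboots) was used; a sketch assuming the persistent form:

    import subprocess

    # -P makes the change persistent by rebuilding the policy, which is
    # consistent with the "Converting ... SID table entries" kernel lines
    # surrounding these events.
    for boolean in ("virt_use_nfs", "virt_sandbox_use_all_caps"):
        subprocess.run(["setsebool", "-P", boolean, "1"], check=True)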
Nov 25 19:40:42 np0005535736.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 25 19:40:42 np0005535736.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:40:42 np0005535736.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 19:40:42 np0005535736.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:40:42 np0005535736.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:40:42 np0005535736.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:40:42 np0005535736.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:40:42 np0005535736.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:41:01 np0005535736.novalocal dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 19:41:01 np0005535736.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 19:41:01 np0005535736.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 25 19:41:01 np0005535736.novalocal systemd[1]: Reloading.
Nov 25 19:41:01 np0005535736.novalocal systemd-rc-local-generator[8921]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:41:02 np0005535736.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 19:41:02 np0005535736.novalocal sudo[8098]: pam_unix(sudo:session): session closed for user root
Nov 25 19:41:08 np0005535736.novalocal python3[13336]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163e3b-3c83-498f-b0a9-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:41:08 np0005535736.novalocal kernel: evm: overlay not supported
Nov 25 19:41:08 np0005535736.novalocal systemd[4303]: Starting D-Bus User Message Bus...
Nov 25 19:41:08 np0005535736.novalocal dbus-broker-launch[13963]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 19:41:08 np0005535736.novalocal dbus-broker-launch[13963]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 19:41:08 np0005535736.novalocal systemd[4303]: Started D-Bus User Message Bus.
Nov 25 19:41:08 np0005535736.novalocal dbus-broker-lau[13963]: Ready
Nov 25 19:41:08 np0005535736.novalocal systemd[4303]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 19:41:08 np0005535736.novalocal systemd[4303]: Created slice Slice /user.
Nov 25 19:41:08 np0005535736.novalocal systemd[4303]: podman-13944.scope: unit configures an IP firewall, but not running as root.
Nov 25 19:41:08 np0005535736.novalocal systemd[4303]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 19:41:08 np0005535736.novalocal systemd[4303]: Started podman-13944.scope.
Nov 25 19:41:09 np0005535736.novalocal systemd[4303]: Started podman-pause-7a4ab226.scope.
Nov 25 19:41:09 np0005535736.novalocal sudo[14077]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywqsebrkbxffjkxqalfrtdmyshjjonpf ; /usr/bin/python3'
Nov 25 19:41:09 np0005535736.novalocal sudo[14077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:41:09 np0005535736.novalocal python3[14079]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.94:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.94:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:41:09 np0005535736.novalocal python3[14079]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 25 19:41:09 np0005535736.novalocal sudo[14077]: pam_unix(sudo:session): session closed for user root
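The blockinfile task appends a managed TOML stanza to /etc/containers/registries.conf, pointing podman at an insecure registry on 38.102.83.94:5001. A sketch that reproduces the marker-delimited block the module writes:

    from pathlib import Path

    CONF = Path("/etc/containers/registries.conf")
    BLOCK = (
        "# BEGIN ANSIBLE MANAGED BLOCK\n"
        "[[registry]]\n"
        'location = "38.102.83.94:5001"\n'
        "insecure = true\n"
        "# END ANSIBLE MANAGED BLOCK\n"
    )

    text = CONF.read_text()
    if "# BEGIN ANSIBLE MANAGED BLOCK" not in text:   # idempotent, like blockinfile
        CONF.write_text(text.rstrip("\n") + "\n\n" + BLOCK)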
Nov 25 19:41:10 np0005535736.novalocal sshd-session[8074]: Connection closed by 38.102.83.114 port 48202
Nov 25 19:41:10 np0005535736.novalocal sshd-session[8071]: pam_unix(sshd:session): session closed for user zuul
Nov 25 19:41:10 np0005535736.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 19:41:10 np0005535736.novalocal systemd[1]: session-5.scope: Consumed 59.888s CPU time.
Nov 25 19:41:10 np0005535736.novalocal systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Nov 25 19:41:10 np0005535736.novalocal systemd-logind[789]: Removed session 5.
Nov 25 19:41:28 np0005535736.novalocal sshd-session[20909]: Unable to negotiate with 38.102.83.150 port 41024: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 25 19:41:28 np0005535736.novalocal sshd-session[20908]: Connection closed by 38.102.83.150 port 41022 [preauth]
Nov 25 19:41:28 np0005535736.novalocal sshd-session[20910]: Unable to negotiate with 38.102.83.150 port 41038: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 25 19:41:28 np0005535736.novalocal sshd-session[20912]: Connection closed by 38.102.83.150 port 41014 [preauth]
Nov 25 19:41:28 np0005535736.novalocal sshd-session[20911]: Unable to negotiate with 38.102.83.150 port 41032: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 25 19:41:33 np0005535736.novalocal sshd-session[22196]: Accepted publickey for zuul from 38.102.83.114 port 54036 ssh2: RSA SHA256:A2IzWGkyPIJ9qDfl3onK8K/RA0W663rQ8oKe3YJ11n4
Nov 25 19:41:33 np0005535736.novalocal systemd-logind[789]: New session 6 of user zuul.
Nov 25 19:41:33 np0005535736.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 25 19:41:33 np0005535736.novalocal sshd-session[22196]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 19:41:33 np0005535736.novalocal python3[22306]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD3a+lWAHfACDwGjj9ENgqEoF4LjCbx6Me405mdcqTH8tIcG4gwUE0m8B2x5oI0WwquaBjTnDPl85WYWh+mr8uE= zuul@np0005535735.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:41:33 np0005535736.novalocal sudo[22461]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuujnwvgxemxvieunupjbbfkiaclgufh ; /usr/bin/python3'
Nov 25 19:41:33 np0005535736.novalocal sudo[22461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:41:33 np0005535736.novalocal python3[22470]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD3a+lWAHfACDwGjj9ENgqEoF4LjCbx6Me405mdcqTH8tIcG4gwUE0m8B2x5oI0WwquaBjTnDPl85WYWh+mr8uE= zuul@np0005535735.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:41:33 np0005535736.novalocal sudo[22461]: pam_unix(sudo:session): session closed for user root
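ansible.posix.authorized_key installs the same ECDSA key for zuul and root. Functionally, it appends the key to ~/.ssh/authorized_keys if it isn't already present, keeping the directory and file permissions ssh requires; a minimal sketch for one user (key truncated here, full value is in the log above):

    import os, pwd
    from pathlib import Path

    KEY = "ecdsa-sha2-nistp256 AAAAE2Vj... zuul@np0005535735.novalocal"  # truncated

    def authorize(user: str, key: str) -> None:
        entry = pwd.getpwnam(user)
        ssh_dir = Path(entry.pw_dir) / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)
        auth = ssh_dir / "authorized_keys"
        existing = auth.read_text() if auth.exists() else ""
        if key not in existing:
            with auth.open("a") as f:
                f.write(key + "\n")
        auth.chmod(0o600)
        os.chown(ssh_dir, entry.pw_uid, entry.pw_gid)
        os.chown(auth, entry.pw_uid, entry.pw_gid)

    authorize("zuul", KEY)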
Nov 25 19:41:34 np0005535736.novalocal sudo[22737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrcdqujecvuzgrtgtvikvtpqdfeoaoii ; /usr/bin/python3'
Nov 25 19:41:34 np0005535736.novalocal sudo[22737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:41:34 np0005535736.novalocal python3[22746]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005535736.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 19:41:34 np0005535736.novalocal useradd[22816]: new group: name=cloud-admin, GID=1002
Nov 25 19:41:34 np0005535736.novalocal useradd[22816]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 25 19:41:35 np0005535736.novalocal sudo[22737]: pam_unix(sudo:session): session closed for user root
Nov 25 19:41:35 np0005535736.novalocal sudo[23007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndpockjuoalhkmfiwngzwzicmnsgrmrf ; /usr/bin/python3'
Nov 25 19:41:35 np0005535736.novalocal sudo[23007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:41:35 np0005535736.novalocal python3[23016]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD3a+lWAHfACDwGjj9ENgqEoF4LjCbx6Me405mdcqTH8tIcG4gwUE0m8B2x5oI0WwquaBjTnDPl85WYWh+mr8uE= zuul@np0005535735.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:41:35 np0005535736.novalocal sudo[23007]: pam_unix(sudo:session): session closed for user root
Nov 25 19:41:35 np0005535736.novalocal sudo[23139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxaomqwsoukluyhuykdhhtuoaocyjam ; /usr/bin/python3'
Nov 25 19:41:35 np0005535736.novalocal sudo[23139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:41:36 np0005535736.novalocal python3[23145]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:41:36 np0005535736.novalocal sudo[23139]: pam_unix(sudo:session): session closed for user root
Nov 25 19:41:36 np0005535736.novalocal sudo[23364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogduakvmlkxuhdpxmjdjvumrzlcylspt ; /usr/bin/python3'
Nov 25 19:41:36 np0005535736.novalocal sudo[23364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:41:36 np0005535736.novalocal python3[23373]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764099695.7347512-135-265872274552156/source _original_basename=tmp_1dg6iqp follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:41:36 np0005535736.novalocal sudo[23364]: pam_unix(sudo:session): session closed for user root
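These tasks create a cloud-admin user (UID/GID 1002), authorize the same key for it, and install /etc/sudoers.d/cloud-admin with mode 0640. The sudoers content itself is not logged; assuming the usual passwordless rule for a CI admin account, a sketch with a visudo syntax check before the file goes live:

    import os, shutil, subprocess, tempfile

    RULE = "cloud-admin ALL=(ALL) NOPASSWD: ALL\n"   # assumed; not in the log

    with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
        tmp.write(RULE)

    # visudo -c -f validates the candidate file without installing it.
    subprocess.run(["visudo", "-c", "-f", tmp.name], check=True)
    shutil.move(tmp.name, "/etc/sudoers.d/cloud-admin")
    os.chmod("/etc/sudoers.d/cloud-admin", 0o640)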
Nov 25 19:41:37 np0005535736.novalocal sudo[23600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elniyvbfnckimdcroullvskkuvvsnwua ; /usr/bin/python3'
Nov 25 19:41:37 np0005535736.novalocal sudo[23600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:41:37 np0005535736.novalocal python3[23607]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 25 19:41:37 np0005535736.novalocal systemd[1]: Starting Hostname Service...
Nov 25 19:41:37 np0005535736.novalocal systemd[1]: Started Hostname Service.
Nov 25 19:41:37 np0005535736.novalocal systemd-hostnamed[23672]: Changed pretty hostname to 'compute-0'
Nov 25 19:41:37 compute-0 systemd-hostnamed[23672]: Hostname set to <compute-0> (static)
Nov 25 19:41:37 compute-0 NetworkManager[7181]: <info>  [1764099697.6041] hostname: static hostname changed from "np0005535736.novalocal" to "compute-0"
Nov 25 19:41:37 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:41:37 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:41:37 compute-0 sudo[23600]: pam_unix(sudo:session): session closed for user root
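ansible.builtin.hostname with use=systemd goes through systemd-hostnamed, which is why hostnamed starts, logs the change, and the syslog host field flips from np0005535736.novalocal to compute-0 mid-stream. The same change as a shell-out sketch:

    import subprocess

    # hostnamectl talks to systemd-hostnamed over D-Bus and updates the
    # static hostname in /etc/hostname, matching the "(static)" log line.
    subprocess.run(["hostnamectl", "set-hostname", "compute-0"], check=True)
    print(subprocess.run(["hostnamectl", "--static"],
                         capture_output=True, text=True).stdout.strip())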
Nov 25 19:41:37 compute-0 sshd-session[22251]: Connection closed by 38.102.83.114 port 54036
Nov 25 19:41:37 compute-0 sshd-session[22196]: pam_unix(sshd:session): session closed for user zuul
Nov 25 19:41:37 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 19:41:37 compute-0 systemd[1]: session-6.scope: Consumed 2.555s CPU time.
Nov 25 19:41:37 compute-0 systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Nov 25 19:41:37 compute-0 systemd-logind[789]: Removed session 6.
Nov 25 19:41:47 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:41:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 19:41:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 19:41:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 4.504s CPU time.
Nov 25 19:41:57 compute-0 systemd[1]: run-radb4723c42534f78ad5105e77999906a.service: Deactivated successfully.
Nov 25 19:42:07 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 19:42:46 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 19:42:46 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 19:42:46 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 19:42:46 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 19:45:18 compute-0 sshd-session[29929]: Accepted publickey for zuul from 38.102.83.150 port 37710 ssh2: RSA SHA256:A2IzWGkyPIJ9qDfl3onK8K/RA0W663rQ8oKe3YJ11n4
Nov 25 19:45:18 compute-0 systemd-logind[789]: New session 7 of user zuul.
Nov 25 19:45:18 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 25 19:45:18 compute-0 sshd-session[29929]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 19:45:19 compute-0 python3[30005]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:45:20 compute-0 sudo[30119]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvujuuvkmcnkfpbbgzubhkbnghwmrrus ; /usr/bin/python3'
Nov 25 19:45:20 compute-0 sudo[30119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:20 compute-0 python3[30121]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:45:20 compute-0 sudo[30119]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:21 compute-0 sudo[30192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfhcxtphuntkhnflpwwvzcyiekimrurq ; /usr/bin/python3'
Nov 25 19:45:21 compute-0 sudo[30192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:21 compute-0 python3[30194]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764099920.5958285-33603-80980246814756/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:45:21 compute-0 sudo[30192]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:21 compute-0 sudo[30218]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wravcwhtvawoobupmpebbojrsqknekdb ; /usr/bin/python3'
Nov 25 19:45:21 compute-0 sudo[30218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:21 compute-0 python3[30220]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:45:21 compute-0 sudo[30218]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:21 compute-0 sudo[30291]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyiqskjutnemkngcsyhkpjsvyyauwvhp ; /usr/bin/python3'
Nov 25 19:45:21 compute-0 sudo[30291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:22 compute-0 python3[30293]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764099920.5958285-33603-80980246814756/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:45:22 compute-0 sudo[30291]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:22 compute-0 sudo[30317]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfwjkisvwgxzbptazaikxwjsxhoccprn ; /usr/bin/python3'
Nov 25 19:45:22 compute-0 sudo[30317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:22 compute-0 python3[30319]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:45:22 compute-0 sudo[30317]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:22 compute-0 sudo[30390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvxkupdvqmjqqaetcutzgohgjyeldgeo ; /usr/bin/python3'
Nov 25 19:45:22 compute-0 sudo[30390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:22 compute-0 python3[30392]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764099920.5958285-33603-80980246814756/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:45:22 compute-0 sudo[30390]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:22 compute-0 sudo[30416]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucgmytptlywkjswatwvsbwhcyhdjxasg ; /usr/bin/python3'
Nov 25 19:45:22 compute-0 sudo[30416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:23 compute-0 python3[30418]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:45:23 compute-0 sudo[30416]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:23 compute-0 sudo[30489]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hobmhtmnndhhenrqsqbpjeloizarqizx ; /usr/bin/python3'
Nov 25 19:45:23 compute-0 sudo[30489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:23 compute-0 python3[30491]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764099920.5958285-33603-80980246814756/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:45:23 compute-0 sudo[30489]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:23 compute-0 sudo[30515]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvirdsudojukrdsujfyzrixuyervjnjt ; /usr/bin/python3'
Nov 25 19:45:23 compute-0 sudo[30515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:23 compute-0 python3[30517]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:45:23 compute-0 sudo[30515]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:24 compute-0 sudo[30588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhxnmjipowtnrzqxawhkysswrdjcgsbl ; /usr/bin/python3'
Nov 25 19:45:24 compute-0 sudo[30588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:24 compute-0 python3[30590]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764099920.5958285-33603-80980246814756/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:45:24 compute-0 sudo[30588]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:24 compute-0 sudo[30614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlfvwujoykqwkslzgcuiiazhtfvufmpl ; /usr/bin/python3'
Nov 25 19:45:24 compute-0 sudo[30614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:24 compute-0 python3[30616]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:45:24 compute-0 sudo[30614]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:24 compute-0 sudo[30687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxaewanbulpnnngiphrcggckzlrqflil ; /usr/bin/python3'
Nov 25 19:45:24 compute-0 sudo[30687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:24 compute-0 python3[30689]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764099920.5958285-33603-80980246814756/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:45:24 compute-0 sudo[30687]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:25 compute-0 sudo[30713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsfawdjflfnyxlauejvfnbdbsniakekz ; /usr/bin/python3'
Nov 25 19:45:25 compute-0 sudo[30713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:25 compute-0 python3[30715]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:45:25 compute-0 sudo[30713]: pam_unix(sudo:session): session closed for user root
Nov 25 19:45:25 compute-0 sudo[30786]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgiowizrcyidondraqadkeczzpahehac ; /usr/bin/python3'
Nov 25 19:45:25 compute-0 sudo[30786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:45:25 compute-0 python3[30788]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764099920.5958285-33603-80980246814756/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:45:25 compute-0 sudo[30786]: pam_unix(sudo:session): session closed for user root
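The stat/copy pairs above push six delorean/repo-setup files into /etc/yum.repos.d/. Their contents are not logged, but yum repo files follow a fixed INI layout; a hypothetical stanza for illustration (every value here is assumed, only the filename comes from the log):

    import configparser

    repo = configparser.ConfigParser()
    repo["delorean"] = {                      # section and values are assumed
        "name": "delorean",
        "baseurl": "https://trunk.rdoproject.org/centos9-antelope/current/",
        "enabled": "1",
        "gpgcheck": "0",
    }
    with open("/etc/yum.repos.d/delorean.repo", "w") as f:
        repo.write(f)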
Nov 25 19:45:27 compute-0 sshd-session[30816]: Connection closed by 192.168.122.11 port 51038 [preauth]
Nov 25 19:45:27 compute-0 sshd-session[30813]: Connection closed by 192.168.122.11 port 51044 [preauth]
Nov 25 19:45:27 compute-0 sshd-session[30817]: Unable to negotiate with 192.168.122.11 port 51054: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 25 19:45:27 compute-0 sshd-session[30814]: Unable to negotiate with 192.168.122.11 port 51068: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 25 19:45:27 compute-0 sshd-session[30815]: Unable to negotiate with 192.168.122.11 port 51076: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 25 19:45:37 compute-0 python3[30846]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:45:55 compute-0 sshd-session[30848]: Invalid user bobby from 193.32.162.157 port 40498
Nov 25 19:45:55 compute-0 sshd-session[30848]: Connection closed by invalid user bobby 193.32.162.157 port 40498 [preauth]
Nov 25 19:45:59 compute-0 sshd-session[30850]: Connection closed by authenticating user root 193.32.162.157 port 40506 [preauth]
Nov 25 19:46:02 compute-0 sshd-session[30852]: Connection closed by authenticating user root 193.32.162.157 port 51538 [preauth]
Nov 25 19:46:06 compute-0 sshd-session[30854]: Connection closed by authenticating user root 193.32.162.157 port 51568 [preauth]
Nov 25 19:46:09 compute-0 sshd-session[30856]: Connection closed by authenticating user root 193.32.162.157 port 51602 [preauth]
Nov 25 19:46:12 compute-0 sshd-session[30858]: Connection closed by authenticating user root 193.32.162.157 port 36042 [preauth]
Nov 25 19:46:15 compute-0 sshd-session[30860]: Invalid user ae from 193.32.162.157 port 36052
Nov 25 19:46:15 compute-0 sshd-session[30860]: Connection closed by invalid user ae 193.32.162.157 port 36052 [preauth]
Nov 25 19:46:19 compute-0 sshd-session[30862]: Connection closed by authenticating user root 193.32.162.157 port 36064 [preauth]
Nov 25 19:46:22 compute-0 sshd-session[30864]: Connection closed by authenticating user root 193.32.162.157 port 53488 [preauth]
Nov 25 19:46:25 compute-0 sshd-session[30866]: Connection closed by authenticating user root 193.32.162.157 port 53502 [preauth]
Nov 25 19:46:28 compute-0 sshd-session[30869]: Connection closed by authenticating user root 193.32.162.157 port 53510 [preauth]
Nov 25 19:46:32 compute-0 sshd-session[30871]: Connection closed by authenticating user root 193.32.162.157 port 34108 [preauth]
Nov 25 19:46:34 compute-0 sshd-session[30873]: Invalid user max from 193.32.162.157 port 34128
Nov 25 19:46:35 compute-0 sshd-session[30873]: Connection closed by invalid user max 193.32.162.157 port 34128 [preauth]
Nov 25 19:46:38 compute-0 sshd-session[30875]: Connection closed by authenticating user root 193.32.162.157 port 34130 [preauth]
Nov 25 19:46:41 compute-0 sshd-session[30877]: Connection closed by authenticating user root 193.32.162.157 port 58074 [preauth]
Nov 25 19:46:44 compute-0 sshd-session[30879]: Connection closed by authenticating user root 193.32.162.157 port 58080 [preauth]
Nov 25 19:46:47 compute-0 sshd-session[30881]: Connection closed by authenticating user root 193.32.162.157 port 58086 [preauth]
Nov 25 19:46:51 compute-0 sshd-session[30883]: Connection closed by authenticating user root 193.32.162.157 port 34132 [preauth]
Nov 25 19:46:54 compute-0 sshd-session[30885]: Connection closed by authenticating user root 193.32.162.157 port 34174 [preauth]
Nov 25 19:46:56 compute-0 sshd-session[30887]: Invalid user sysadmin from 193.32.162.157 port 34188
Nov 25 19:46:57 compute-0 sshd-session[30887]: Connection closed by invalid user sysadmin 193.32.162.157 port 34188 [preauth]
Nov 25 19:47:00 compute-0 sshd-session[30889]: Connection closed by authenticating user root 193.32.162.157 port 54122 [preauth]
Nov 25 19:47:04 compute-0 sshd-session[30891]: Connection closed by authenticating user root 193.32.162.157 port 54142 [preauth]
Nov 25 19:47:07 compute-0 sshd-session[30893]: Connection closed by authenticating user root 193.32.162.157 port 54176 [preauth]
Nov 25 19:47:10 compute-0 sshd-session[30895]: Connection closed by authenticating user root 193.32.162.157 port 48596 [preauth]
Nov 25 19:47:13 compute-0 sshd-session[30897]: Connection closed by authenticating user root 193.32.162.157 port 48626 [preauth]
Nov 25 19:47:17 compute-0 sshd-session[30899]: Connection closed by authenticating user root 193.32.162.157 port 48638 [preauth]
Nov 25 19:47:20 compute-0 sshd-session[30901]: Connection closed by authenticating user root 193.32.162.157 port 46360 [preauth]
Nov 25 19:47:23 compute-0 sshd-session[30903]: Connection closed by authenticating user root 193.32.162.157 port 46364 [preauth]
Nov 25 19:47:26 compute-0 sshd-session[30905]: Connection closed by authenticating user root 193.32.162.157 port 46380 [preauth]
Nov 25 19:47:29 compute-0 sshd-session[30907]: Connection closed by authenticating user root 193.32.162.157 port 47104 [preauth]
Nov 25 19:47:33 compute-0 sshd-session[30909]: Connection closed by authenticating user root 193.32.162.157 port 47118 [preauth]
Nov 25 19:47:36 compute-0 sshd-session[30912]: Connection closed by authenticating user root 193.32.162.157 port 47128 [preauth]
Nov 25 19:47:40 compute-0 sshd-session[30914]: Connection closed by authenticating user root 193.32.162.157 port 47136 [preauth]
Nov 25 19:47:43 compute-0 sshd-session[30916]: Connection closed by authenticating user root 193.32.162.157 port 46158 [preauth]
Nov 25 19:47:46 compute-0 sshd-session[30918]: Connection closed by authenticating user root 193.32.162.157 port 46172 [preauth]
Nov 25 19:47:49 compute-0 sshd-session[30920]: Connection closed by authenticating user root 193.32.162.157 port 46180 [preauth]
Nov 25 19:47:53 compute-0 sshd-session[30922]: Connection closed by authenticating user root 193.32.162.157 port 54384 [preauth]
Nov 25 19:47:56 compute-0 sshd-session[30924]: Connection closed by authenticating user root 193.32.162.157 port 54386 [preauth]
Nov 25 19:47:59 compute-0 sshd-session[30926]: Connection closed by authenticating user root 193.32.162.157 port 54402 [preauth]
Nov 25 19:48:02 compute-0 sshd-session[30928]: Connection closed by authenticating user root 193.32.162.157 port 37618 [preauth]
Nov 25 19:48:05 compute-0 sshd-session[30930]: Invalid user alice from 193.32.162.157 port 37632
Nov 25 19:48:06 compute-0 sshd-session[30930]: Connection closed by invalid user alice 193.32.162.157 port 37632 [preauth]
Nov 25 19:48:09 compute-0 sshd-session[30932]: Connection closed by authenticating user root 193.32.162.157 port 37638 [preauth]
Nov 25 19:48:12 compute-0 sshd-session[30934]: Connection closed by authenticating user root 193.32.162.157 port 45940 [preauth]
Nov 25 19:48:15 compute-0 sshd-session[30936]: Connection closed by authenticating user root 193.32.162.157 port 45968 [preauth]
Nov 25 19:48:19 compute-0 sshd-session[30938]: Connection closed by authenticating user root 193.32.162.157 port 45984 [preauth]
Nov 25 19:48:22 compute-0 sshd-session[30940]: Connection closed by authenticating user root 193.32.162.157 port 55072 [preauth]
Nov 25 19:48:25 compute-0 sshd-session[30942]: Connection closed by authenticating user root 193.32.162.157 port 55080 [preauth]
Nov 25 19:48:28 compute-0 sshd-session[30944]: Connection closed by authenticating user root 193.32.162.157 port 55112 [preauth]
Nov 25 19:48:31 compute-0 sshd-session[30946]: Connection closed by authenticating user root 193.32.162.157 port 42506 [preauth]
Nov 25 19:48:34 compute-0 sshd-session[30948]: Connection closed by authenticating user root 193.32.162.157 port 42524 [preauth]
Nov 25 19:48:38 compute-0 sshd-session[30950]: Connection closed by authenticating user root 193.32.162.157 port 42536 [preauth]
Nov 25 19:48:41 compute-0 sshd-session[30952]: Connection closed by authenticating user root 193.32.162.157 port 56588 [preauth]
Nov 25 19:48:44 compute-0 sshd-session[30954]: Connection closed by authenticating user root 193.32.162.157 port 56594 [preauth]
Nov 25 19:48:47 compute-0 sshd-session[30956]: Connection closed by authenticating user root 193.32.162.157 port 56600 [preauth]
Nov 25 19:48:50 compute-0 sshd-session[30958]: Connection closed by authenticating user root 193.32.162.157 port 37972 [preauth]
Nov 25 19:48:53 compute-0 sshd-session[30960]: Connection closed by authenticating user root 193.32.162.157 port 37976 [preauth]
Nov 25 19:48:56 compute-0 sshd-session[30962]: Connection closed by authenticating user root 193.32.162.157 port 37990 [preauth]
Nov 25 19:49:00 compute-0 sshd-session[30964]: Connection closed by authenticating user root 193.32.162.157 port 48756 [preauth]
Nov 25 19:49:03 compute-0 sshd-session[30966]: Connection closed by authenticating user root 193.32.162.157 port 48780 [preauth]
Nov 25 19:49:06 compute-0 sshd-session[30968]: Connection closed by authenticating user root 193.32.162.157 port 48788 [preauth]
Nov 25 19:49:09 compute-0 sshd-session[30970]: Connection closed by authenticating user root 193.32.162.157 port 48816 [preauth]
Nov 25 19:49:12 compute-0 sshd-session[30972]: Connection closed by authenticating user root 193.32.162.157 port 43434 [preauth]
Nov 25 19:49:16 compute-0 sshd-session[30974]: Connection closed by authenticating user root 193.32.162.157 port 43448 [preauth]
Nov 25 19:49:19 compute-0 sshd-session[30976]: Connection closed by authenticating user root 193.32.162.157 port 43474 [preauth]
Nov 25 19:49:22 compute-0 sshd-session[30978]: Connection closed by authenticating user root 193.32.162.157 port 48152 [preauth]
Nov 25 19:49:25 compute-0 sshd-session[30981]: Connection closed by authenticating user root 193.32.162.157 port 48172 [preauth]
Nov 25 19:49:27 compute-0 sshd-session[30983]: Invalid user jack from 193.32.162.157 port 48198
Nov 25 19:49:28 compute-0 sshd-session[30983]: Connection closed by invalid user jack 193.32.162.157 port 48198 [preauth]
Nov 25 19:49:31 compute-0 sshd-session[30985]: Connection closed by authenticating user root 193.32.162.157 port 47388 [preauth]
Nov 25 19:49:34 compute-0 sshd-session[30987]: Connection closed by authenticating user root 193.32.162.157 port 47398 [preauth]
Nov 25 19:49:37 compute-0 sshd-session[30989]: Connection closed by authenticating user root 193.32.162.157 port 47414 [preauth]
Nov 25 19:49:40 compute-0 sshd-session[30992]: Connection closed by authenticating user root 193.32.162.157 port 43674 [preauth]
Nov 25 19:49:43 compute-0 sshd-session[30994]: Connection closed by authenticating user root 193.32.162.157 port 43680 [preauth]
Nov 25 19:49:47 compute-0 sshd-session[30996]: Connection closed by authenticating user root 193.32.162.157 port 43688 [preauth]
Nov 25 19:49:49 compute-0 sshd-session[30998]: Connection closed by authenticating user root 193.32.162.157 port 34066 [preauth]
Nov 25 19:49:52 compute-0 sshd-session[31000]: Connection closed by authenticating user root 193.32.162.157 port 34068 [preauth]
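
From 19:45:55 to 19:49:52 the host 193.32.162.157 opens a new connection roughly every three seconds, alternating root with common usernames (bobby, ae, max, sysadmin, alice, jack) and disconnecting pre-auth each time: the signature of an SSH password-guessing bot. nftables is installed later in this log (19:57:12); a minimal, hypothetical rate limit that would blunt this pattern looks like the following (table, chain, and set names are assumptions):

    #!/usr/sbin/nft -f
    # Hypothetical sketch: drop sources opening >6 new SSH connections/minute.
    table inet ssh_guard {
        set floods {
            type ipv4_addr
            flags dynamic
            timeout 10m
        }
        chain input {
            type filter hook input priority 0; policy accept;
            tcp dport 22 ct state new add @floods { ip saddr limit rate over 6/minute } drop
        }
    }
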
Nov 25 19:50:36 compute-0 sshd-session[29932]: Received disconnect from 38.102.83.150 port 37710:11: disconnected by user
Nov 25 19:50:36 compute-0 sshd-session[29932]: Disconnected from user zuul 38.102.83.150 port 37710
Nov 25 19:50:36 compute-0 sshd-session[29929]: pam_unix(sshd:session): session closed for user zuul
Nov 25 19:50:36 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 19:50:36 compute-0 systemd[1]: session-7.scope: Consumed 5.714s CPU time.
Nov 25 19:50:36 compute-0 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Nov 25 19:50:36 compute-0 systemd-logind[789]: Removed session 7.
Nov 25 19:56:24 compute-0 sshd-session[31006]: Accepted publickey for zuul from 192.168.122.30 port 48392 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 19:56:24 compute-0 systemd-logind[789]: New session 8 of user zuul.
Nov 25 19:56:24 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 25 19:56:24 compute-0 sshd-session[31006]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 19:56:25 compute-0 python3.9[31159]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:56:26 compute-0 sudo[31338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgmpdowjaegzrexyqczlailaixchkvru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100586.1873722-32-127706346744739/AnsiballZ_command.py'
Nov 25 19:56:26 compute-0 sudo[31338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:56:26 compute-0 python3.9[31340]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
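
The become'd command above bootstraps the repo-setup tool directly from GitHub. Reconstructed as a standalone script (the commands are verbatim from the logged _raw_params; the bash shebang and comments are added, since pushd/popd and pipefail are bashisms):

    #!/bin/bash
    set -euxo pipefail   # abort on any failure, including mid-pipeline
    pushd /var/tmp
    # Fetch and unpack the main branch of openstack-k8s-operators/repo-setup.
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv                        # throwaway virtualenv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./   # pbr needs a version hint outside a git checkout
    ./venv/bin/repo-setup current-podified -b antelope   # write the dlrn/antelope repo files
    popd
    rm -rf repo-setup-main

The delorean-* and dlrn-antelope-* repositories that dnf refreshes at 19:57:57 below are the output of this step.
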
Nov 25 19:56:34 compute-0 sudo[31338]: pam_unix(sudo:session): session closed for user root
Nov 25 19:56:35 compute-0 sshd-session[31009]: Connection closed by 192.168.122.30 port 48392
Nov 25 19:56:35 compute-0 sshd-session[31006]: pam_unix(sshd:session): session closed for user zuul
Nov 25 19:56:35 compute-0 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Nov 25 19:56:35 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 19:56:35 compute-0 systemd[1]: session-8.scope: Consumed 8.869s CPU time.
Nov 25 19:56:35 compute-0 systemd-logind[789]: Removed session 8.
Nov 25 19:56:50 compute-0 sshd-session[31397]: Accepted publickey for zuul from 192.168.122.30 port 57380 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 19:56:50 compute-0 systemd-logind[789]: New session 9 of user zuul.
Nov 25 19:56:50 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 25 19:56:50 compute-0 sshd-session[31397]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 19:56:51 compute-0 python3.9[31550]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 19:56:52 compute-0 python3.9[31724]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:56:53 compute-0 sudo[31874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfghlferojjinhatqzpkowdojgymkcmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100613.0094526-45-138098171652174/AnsiballZ_command.py'
Nov 25 19:56:53 compute-0 sudo[31874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:56:53 compute-0 python3.9[31876]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:56:53 compute-0 sudo[31874]: pam_unix(sudo:session): session closed for user root
Nov 25 19:56:55 compute-0 sudo[32027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dczsagbwpxxmioqadtubdjkcglnfoupz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100614.4421837-57-204274961755763/AnsiballZ_stat.py'
Nov 25 19:56:55 compute-0 sudo[32027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:56:55 compute-0 python3.9[32029]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:56:55 compute-0 sudo[32027]: pam_unix(sudo:session): session closed for user root
Nov 25 19:56:56 compute-0 sudo[32180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqbiroawfhwrhyfkhsnenvcdatcatwig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100615.544007-65-259349487449534/AnsiballZ_file.py'
Nov 25 19:56:56 compute-0 sudo[32180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:56:56 compute-0 python3.9[32182]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:56 compute-0 sudo[32180]: pam_unix(sudo:session): session closed for user root
Nov 25 19:56:57 compute-0 sudo[32332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srpryhwjvuytyzewsdmqqdxeotlahogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100616.603535-73-153996722036237/AnsiballZ_stat.py'
Nov 25 19:56:57 compute-0 sudo[32332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:56:57 compute-0 python3.9[32334]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:56:57 compute-0 sudo[32332]: pam_unix(sudo:session): session closed for user root
Nov 25 19:56:57 compute-0 sudo[32455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjktqmpyghhnqneymksnznlemufcjqjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100616.603535-73-153996722036237/AnsiballZ_copy.py'
Nov 25 19:56:57 compute-0 sudo[32455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:56:58 compute-0 python3.9[32457]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100616.603535-73-153996722036237/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:58 compute-0 sudo[32455]: pam_unix(sudo:session): session closed for user root
Nov 25 19:56:58 compute-0 sudo[32607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxxsdmeaqldbulisguuivmmxmgqifony ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100618.333159-88-249420911447160/AnsiballZ_setup.py'
Nov 25 19:56:58 compute-0 sudo[32607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:56:59 compute-0 python3.9[32609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:56:59 compute-0 sudo[32607]: pam_unix(sudo:session): session closed for user root
Nov 25 19:56:59 compute-0 sudo[32763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhpkhbfgfzuzeosnupxipaeqifiwaxfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100619.564194-96-31380949869969/AnsiballZ_file.py'
Nov 25 19:56:59 compute-0 sudo[32763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:57:00 compute-0 python3.9[32765]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:57:00 compute-0 sudo[32763]: pam_unix(sudo:session): session closed for user root
Nov 25 19:57:00 compute-0 sudo[32915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqncmgldcboluwyjewrevwsafzxdmbgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100620.4642339-105-27404115385045/AnsiballZ_file.py'
Nov 25 19:57:00 compute-0 sudo[32915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:57:01 compute-0 python3.9[32917]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:57:01 compute-0 sudo[32915]: pam_unix(sudo:session): session closed for user root
Nov 25 19:57:02 compute-0 python3.9[33067]: ansible-ansible.builtin.service_facts Invoked
Nov 25 19:57:07 compute-0 python3.9[33320]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
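
Running lineinfile against the read-only /proc/cmdline with create=False cannot modify anything; it is the usual check-style idiom for asserting that the node was booted with cloud-init=disabled (presumably run in check mode, which the log does not record). The equivalent shell test would be:

    # Exit 0 only if the kernel command line disables cloud-init.
    grep -qw 'cloud-init=disabled' /proc/cmdline
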
Nov 25 19:57:08 compute-0 python3.9[33470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:57:09 compute-0 python3.9[33624]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:57:10 compute-0 sudo[33780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-welhmphtjnlhmxvabpftleizvoxxtdqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100630.4490604-153-67747722078795/AnsiballZ_setup.py'
Nov 25 19:57:10 compute-0 sudo[33780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:57:11 compute-0 python3.9[33782]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 19:57:11 compute-0 sudo[33780]: pam_unix(sudo:session): session closed for user root
Nov 25 19:57:12 compute-0 sudo[33864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwpvlvrnbdrhoqfuhfltwebjiurawcti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100630.4490604-153-67747722078795/AnsiballZ_dnf.py'
Nov 25 19:57:12 compute-0 sudo[33864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:57:12 compute-0 python3.9[33866]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:57:12 compute-0 irqbalance[781]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 25 19:57:12 compute-0 irqbalance[781]: IRQ 26 affinity is now unmanaged
Nov 25 19:57:56 compute-0 systemd[1]: Reloading.
Nov 25 19:57:56 compute-0 systemd-rc-local-generator[34063]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:57:56 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 19:57:56 compute-0 systemd[1]: Reloading.
Nov 25 19:57:56 compute-0 systemd-rc-local-generator[34100]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:57:57 compute-0 systemd[1]: Starting dnf makecache...
Nov 25 19:57:57 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 19:57:57 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 19:57:57 compute-0 systemd[1]: Reloading.
Nov 25 19:57:57 compute-0 systemd-rc-local-generator[34142]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:57:57 compute-0 dnf[34110]: Failed determining last makecache time.
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-barbican-42b4c41831408a8e323 153 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 187 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-cinder-1c00d6490d88e436f26ef 168 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-python-stevedore-c4acc5639fd2329372142 177 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-python-observabilityclient-2f31846d73c 185 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-os-net-config-bbae2ed8a159b0435a473f38 185 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 170 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-python-designate-tests-tempest-347fdbc 176 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-glance-1fd12c29b339f30fe823e 179 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 167 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-manila-3c01b7181572c95dac462 173 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-python-whitebox-neutron-tests-tempest- 158 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-octavia-ba397f07a7331190208c 162 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-watcher-c014f81a8647287f6dcc 157 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-python-tcib-1124124ec06aadbac34f0d340b 167 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 185 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-swift-dc98a8463506ac520c469a 188 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-python-tempestconf-8515371b7cceebd4282 181 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Nov 25 19:57:57 compute-0 dnf[34110]: delorean-openstack-heat-ui-013accbfd179753bc3f0 171 kB/s | 3.0 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: CentOS Stream 9 - BaseOS                         60 kB/s | 6.7 kB     00:00
Nov 25 19:57:57 compute-0 dnf[34110]: CentOS Stream 9 - AppStream                      76 kB/s | 6.8 kB     00:00
Nov 25 19:57:58 compute-0 dnf[34110]: CentOS Stream 9 - CRB                            29 kB/s | 6.5 kB     00:00
Nov 25 19:57:58 compute-0 dnf[34110]: CentOS Stream 9 - Extras packages                28 kB/s | 8.3 kB     00:00
Nov 25 19:57:58 compute-0 dnf[34110]: dlrn-antelope-testing                           100 kB/s | 3.0 kB     00:00
Nov 25 19:57:58 compute-0 dnf[34110]: dlrn-antelope-build-deps                        161 kB/s | 3.0 kB     00:00
Nov 25 19:57:58 compute-0 dnf[34110]: centos9-rabbitmq                                 89 kB/s | 3.0 kB     00:00
Nov 25 19:57:58 compute-0 dnf[34110]: centos9-storage                                  98 kB/s | 3.0 kB     00:00
Nov 25 19:57:58 compute-0 dnf[34110]: centos9-opstools                                114 kB/s | 3.0 kB     00:00
Nov 25 19:57:58 compute-0 dnf[34110]: NFV SIG OpenvSwitch                             102 kB/s | 3.0 kB     00:00
Nov 25 19:57:58 compute-0 dnf[34110]: repo-setup-centos-appstream                     151 kB/s | 4.4 kB     00:00
Nov 25 19:57:59 compute-0 dnf[34110]: repo-setup-centos-baseos                        157 kB/s | 3.9 kB     00:00
Nov 25 19:57:59 compute-0 dnf[34110]: repo-setup-centos-highavailability              143 kB/s | 3.9 kB     00:00
Nov 25 19:57:59 compute-0 dnf[34110]: repo-setup-centos-powertools                    182 kB/s | 4.3 kB     00:00
Nov 25 19:57:59 compute-0 dnf[34110]: Extra Packages for Enterprise Linux 9 - x86_64  272 kB/s |  34 kB     00:00
Nov 25 19:57:59 compute-0 dnf[34110]: Metadata cache created.
Nov 25 19:58:00 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 19:58:00 compute-0 systemd[1]: Finished dnf makecache.
Nov 25 19:58:00 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.905s CPU time.
Nov 25 19:59:01 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Nov 25 19:59:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:59:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 19:59:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:59:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:59:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:59:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:59:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:59:01 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 19:59:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 19:59:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 19:59:01 compute-0 systemd[1]: Reloading.
Nov 25 19:59:02 compute-0 systemd-rc-local-generator[34509]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:59:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 19:59:02 compute-0 sudo[33864]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:03 compute-0 sudo[35415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgzhdjrhibykrxunrbdgdvtwnsfpeyzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100742.8311238-165-126277548390505/AnsiballZ_command.py'
Nov 25 19:59:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 19:59:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 19:59:03 compute-0 sudo[35415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:03 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.421s CPU time.
Nov 25 19:59:03 compute-0 systemd[1]: run-r349bd63adf2a42cbb71d2cc92714b940.service: Deactivated successfully.
Nov 25 19:59:03 compute-0 python3.9[35418]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:04 compute-0 sudo[35415]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:05 compute-0 sudo[35697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faqjshlkmpgxbihaszataitgfzorhsqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100744.8539793-173-125384728835803/AnsiballZ_selinux.py'
Nov 25 19:59:05 compute-0 sudo[35697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:05 compute-0 python3.9[35699]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 19:59:05 compute-0 sudo[35697]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:06 compute-0 sudo[35849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdbyzqggoyepmuwyrymylilhgonmbuza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100746.3594823-184-17070710845963/AnsiballZ_command.py'
Nov 25 19:59:06 compute-0 sudo[35849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:06 compute-0 python3.9[35851]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 19:59:07 compute-0 sudo[35849]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:08 compute-0 sudo[36002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwyqglmopdhdkrkajqlrsmplfwvabof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100748.211149-192-173257426713824/AnsiballZ_file.py'
Nov 25 19:59:08 compute-0 sudo[36002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:08 compute-0 python3.9[36004]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:08 compute-0 sudo[36002]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:09 compute-0 sudo[36154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfboukortqczpknmpvhqawcnwsuyqrnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100749.1876411-200-45540354428214/AnsiballZ_mount.py'
Nov 25 19:59:09 compute-0 sudo[36154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:10 compute-0 python3.9[36156]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 19:59:10 compute-0 sudo[36154]: pam_unix(sudo:session): session closed for user root
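
The last three become'd tasks assemble a 1 GiB swap file: dd allocates /swap, the file task locks it down to 0600, and ansible.posix.mount records the fstab entry. Formatting and activation (mkswap/swapon) are not visible in this excerpt, so the tail of the usual recipe below is an assumption:

    dd if=/dev/zero of=/swap bs=1M count=1024    # 1 GiB, as invoked above
    chown root:root /swap && chmod 0600 /swap    # swap must not be world-readable
    mkswap /swap                                 # assumed follow-up, not in this log
    echo '/swap none swap sw 0 0' >> /etc/fstab  # what the mount module parameters encode
    swapon /swap                                 # assumed activation step
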
Nov 25 19:59:11 compute-0 sudo[36306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfuhdoodsxxoepxmmquupfaflzodyxhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100750.9167526-228-129410803088996/AnsiballZ_file.py'
Nov 25 19:59:11 compute-0 sudo[36306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:11 compute-0 python3.9[36308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:11 compute-0 sudo[36306]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:12 compute-0 sudo[36458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbxzssnofhashligsgpjylpudaeuktro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100751.7641678-236-247282446834602/AnsiballZ_stat.py'
Nov 25 19:59:12 compute-0 sudo[36458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:12 compute-0 python3.9[36460]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:12 compute-0 sudo[36458]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:12 compute-0 sudo[36581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prwxrsqqitrkdkiylupmlpdrnclnbdqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100751.7641678-236-247282446834602/AnsiballZ_copy.py'
Nov 25 19:59:12 compute-0 sudo[36581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:13 compute-0 python3.9[36583]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100751.7641678-236-247282446834602/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=66070892c6d01a4a91d50802bdc535b7b429f97a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:13 compute-0 sudo[36581]: pam_unix(sudo:session): session closed for user root
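
Copying tls-ca-bundle.pem into /etc/pki/ca-trust/source/anchors only stages the CA; on RHEL-family systems the consolidated trust store is rebuilt by update-ca-trust, a step that does not appear in this excerpt and is therefore assumed:

    # Rebuild the extracted trust bundles after adding an anchor (assumed step).
    update-ca-trust extract
    # Count certificates in the regenerated system bundle to confirm the change.
    grep -c 'BEGIN CERTIFICATE' /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
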
Nov 25 19:59:14 compute-0 sudo[36733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leqoksfpoghjadtqhnqbusqbsdifpypf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100753.6871984-260-257508294233460/AnsiballZ_stat.py'
Nov 25 19:59:14 compute-0 sudo[36733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:16 compute-0 python3.9[36735]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:59:16 compute-0 sudo[36733]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:16 compute-0 sudo[36885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmzqquzvcdrmlvowpbkrxtuxvdxwyaos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100756.3049889-268-33139227488550/AnsiballZ_command.py'
Nov 25 19:59:16 compute-0 sudo[36885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:16 compute-0 python3.9[36887]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:16 compute-0 sudo[36885]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:17 compute-0 sudo[37038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urdnfzrjslxdrvadttbcbxrrrznisjai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100757.150899-276-132114335276410/AnsiballZ_file.py'
Nov 25 19:59:17 compute-0 sudo[37038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:17 compute-0 python3.9[37040]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:17 compute-0 sudo[37038]: pam_unix(sudo:session): session closed for user root
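
This pair of tasks populates the LVM devices file: vgimportdevices --all records every device backing an existing VG into /etc/lvm/devices/system.devices, and the follow-up touch guarantees the file exists with restrictive permissions even on hosts where no VG was found and nothing was written. As plain shell:

    /usr/sbin/vgimportdevices --all || true      # no-op when no VGs exist
    touch /etc/lvm/devices/system.devices        # ensure the file is present regardless
    chmod 0600 /etc/lvm/devices/system.devices   # matches the file task's mode
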
Nov 25 19:59:18 compute-0 sudo[37190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgdygpjazaysotmabivuztrkohnskhhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100758.122957-287-158935200561128/AnsiballZ_getent.py'
Nov 25 19:59:18 compute-0 sudo[37190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:18 compute-0 python3.9[37192]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 19:59:18 compute-0 sudo[37190]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:18 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 19:59:19 compute-0 sudo[37344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxqoihxlxljwlyerrkgmmepccuetseta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100759.0154371-295-195559776173561/AnsiballZ_group.py'
Nov 25 19:59:19 compute-0 sudo[37344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:19 compute-0 python3.9[37346]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 19:59:19 compute-0 groupadd[37347]: group added to /etc/group: name=qemu, GID=107
Nov 25 19:59:19 compute-0 groupadd[37347]: group added to /etc/gshadow: name=qemu
Nov 25 19:59:19 compute-0 groupadd[37347]: new group: name=qemu, GID=107
Nov 25 19:59:19 compute-0 sudo[37344]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:20 compute-0 sudo[37502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evwowvjmtpvpczgufojychzkmvgmvnnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100759.993085-303-118022217651470/AnsiballZ_user.py'
Nov 25 19:59:20 compute-0 sudo[37502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:20 compute-0 python3.9[37504]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 19:59:20 compute-0 useradd[37506]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 19:59:20 compute-0 sudo[37502]: pam_unix(sudo:session): session closed for user root
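
The getent/group/user sequence is the standard Ansible idiom for creating an account with a pinned UID/GID only when it does not already exist; qemu is fixed at 107 so file ownership lines up with the packaged qemu user used elsewhere. The shell equivalent:

    # Create group and user qemu with GID/UID 107 unless they already exist.
    getent group qemu  >/dev/null || groupadd -g 107 qemu
    getent passwd qemu >/dev/null || useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu

The same pattern repeats immediately below for the hugetlbfs group at GID 42477.
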
Nov 25 19:59:21 compute-0 sudo[37662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkfqceollhoeddjgcpgqeooqwbesowah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100761.2375507-311-243481944687946/AnsiballZ_getent.py'
Nov 25 19:59:21 compute-0 sudo[37662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:21 compute-0 python3.9[37664]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 19:59:21 compute-0 sudo[37662]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:22 compute-0 sudo[37815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avvtplmajblabhtxfyelgmtyiaxhcitu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100762.1614947-319-17388186312158/AnsiballZ_group.py'
Nov 25 19:59:22 compute-0 sudo[37815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:22 compute-0 python3.9[37817]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 19:59:22 compute-0 groupadd[37818]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 25 19:59:22 compute-0 groupadd[37818]: group added to /etc/gshadow: name=hugetlbfs
Nov 25 19:59:22 compute-0 groupadd[37818]: new group: name=hugetlbfs, GID=42477
Nov 25 19:59:22 compute-0 sudo[37815]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:23 compute-0 sudo[37973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cptesixihxacijyrlavrhaqrhwdoigrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100763.0891728-328-3760311177866/AnsiballZ_file.py'
Nov 25 19:59:23 compute-0 sudo[37973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:23 compute-0 python3.9[37975]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 19:59:23 compute-0 sudo[37973]: pam_unix(sudo:session): session closed for user root
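
/var/lib/vhost_sockets is created qemu-owned with an explicit SELinux type (virt_cache_t) so QEMU and vhost-user backends can exchange socket files there. Outside Ansible, making that label survive a relabel takes a semanage rule plus restorecon; the file module sets the context directly, so the sketch below is an equivalent, not what actually ran:

    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets && chmod 0755 /var/lib/vhost_sockets
    semanage fcontext -a -t virt_cache_t '/var/lib/vhost_sockets(/.*)?'   # persist the label
    restorecon -Rv /var/lib/vhost_sockets                                 # apply it now
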
Nov 25 19:59:24 compute-0 sudo[38125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvwtcuvnfwagybeepckaduemvpszlxew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100764.1361-339-85522248053080/AnsiballZ_dnf.py'
Nov 25 19:59:24 compute-0 sudo[38125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:24 compute-0 python3.9[38127]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:59:26 compute-0 sudo[38125]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:26 compute-0 sudo[38278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrlbmqjdmvuxyjanhmvuyvsyawogtopu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100766.5077045-347-256298892356729/AnsiballZ_file.py'
Nov 25 19:59:26 compute-0 sudo[38278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:27 compute-0 python3.9[38280]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:27 compute-0 sudo[38278]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:27 compute-0 sudo[38430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqkcocgsvvtfrglnjprfeiledhqturp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100767.2809956-355-251519941974236/AnsiballZ_stat.py'
Nov 25 19:59:27 compute-0 sudo[38430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:27 compute-0 python3.9[38432]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:27 compute-0 sudo[38430]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:28 compute-0 sudo[38553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rskojxiislukktiwqdlzgrsrdicbicqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100767.2809956-355-251519941974236/AnsiballZ_copy.py'
Nov 25 19:59:28 compute-0 sudo[38553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:28 compute-0 python3.9[38555]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764100767.2809956-355-251519941974236/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:28 compute-0 sudo[38553]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:30 compute-0 sudo[38705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndsjirevfjkpdvnyhaitvvmcgakqnmcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100768.7694564-370-109504925803599/AnsiballZ_systemd.py'
Nov 25 19:59:30 compute-0 sudo[38705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:30 compute-0 python3.9[38707]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:59:30 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 19:59:30 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 19:59:30 compute-0 kernel: Bridge firewalling registered
Nov 25 19:59:30 compute-0 systemd-modules-load[38711]: Inserted module 'br_netfilter'
Nov 25 19:59:30 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 19:59:30 compute-0 sudo[38705]: pam_unix(sudo:session): session closed for user root
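
Restarting systemd-modules-load right after 99-edpm.conf lands, followed by "Inserted module 'br_netfilter'", implies the file lists br_netfilter; the kernel's own notice two lines earlier explains why, since bridged traffic no longer traverses arp/ip/ip6tables unless that module is loaded. The file's full contents are not logged; a minimal version consistent with the observed load would be:

    # /etc/modules-load.d/99-edpm.conf -- contents assumed from the observed insert
    br_netfilter
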
Nov 25 19:59:31 compute-0 sudo[38864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lexpjspphexdwodxahymbqsjvycofeqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100770.7919204-378-204132632105730/AnsiballZ_stat.py'
Nov 25 19:59:31 compute-0 sudo[38864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:31 compute-0 python3.9[38866]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:31 compute-0 sudo[38864]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:31 compute-0 sudo[38987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sodztlactmgwengsxbzwtngprjmdzhcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100770.7919204-378-204132632105730/AnsiballZ_copy.py'
Nov 25 19:59:31 compute-0 sudo[38987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:32 compute-0 python3.9[38989]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764100770.7919204-378-204132632105730/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:32 compute-0 sudo[38987]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:32 compute-0 sudo[39139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwocltmprhsfhlalvdvkozofjlyyghlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100772.5611446-396-230045054008700/AnsiballZ_dnf.py'
Nov 25 19:59:32 compute-0 sudo[39139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:33 compute-0 python3.9[39141]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:59:36 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Nov 25 19:59:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 19:59:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 19:59:37 compute-0 systemd[1]: Reloading.
Nov 25 19:59:37 compute-0 systemd-rc-local-generator[39200]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:59:37 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 19:59:37 compute-0 sudo[39139]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:38 compute-0 python3.9[40315]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:59:39 compute-0 python3.9[41297]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 19:59:40 compute-0 python3.9[42021]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:59:41 compute-0 sudo[43025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tennigwhhavwmqbxcvngstndiwdzinra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100780.8775122-435-87999146573117/AnsiballZ_command.py'
Nov 25 19:59:41 compute-0 sudo[43025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:41 compute-0 python3.9[43058]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:41 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 19:59:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 19:59:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 19:59:41 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.623s CPU time.
Nov 25 19:59:41 compute-0 systemd[1]: run-r7aecbead824042c39eb02b7b5fa4ce5e.service: Deactivated successfully.
Nov 25 19:59:42 compute-0 systemd[1]: Starting Authorization Manager...
Nov 25 19:59:42 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 19:59:42 compute-0 polkitd[43533]: Started polkitd version 0.117
Nov 25 19:59:42 compute-0 polkitd[43533]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 19:59:42 compute-0 polkitd[43533]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 19:59:42 compute-0 polkitd[43533]: Finished loading, compiling and executing 2 rules
Nov 25 19:59:42 compute-0 polkitd[43533]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 25 19:59:42 compute-0 systemd[1]: Started Authorization Manager.
Nov 25 19:59:42 compute-0 sudo[43025]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:42 compute-0 sudo[43701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sehxtdrkoefskwygfwdtzbgrgdecwjle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100782.4850805-444-141712877426220/AnsiballZ_systemd.py'
Nov 25 19:59:42 compute-0 sudo[43701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:43 compute-0 python3.9[43703]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:59:43 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 19:59:43 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 19:59:43 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 19:59:43 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 19:59:43 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 19:59:43 compute-0 sudo[43701]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:44 compute-0 python3.9[43865]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 19:59:46 compute-0 sudo[44015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xokytzzedyialhglfvdddrfdpureqneh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100786.4212563-501-20791338243364/AnsiballZ_systemd.py'
Nov 25 19:59:46 compute-0 sudo[44015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:47 compute-0 python3.9[44017]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:59:47 compute-0 systemd[1]: Reloading.
Nov 25 19:59:47 compute-0 systemd-rc-local-generator[44037]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:59:47 compute-0 sudo[44015]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:47 compute-0 sudo[44203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgaodtzrfvkknwlujxxxuzxyycxkvbwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100787.5564697-501-239503131124231/AnsiballZ_systemd.py'
Nov 25 19:59:47 compute-0 sudo[44203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:48 compute-0 python3.9[44205]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:59:48 compute-0 systemd[1]: Reloading.
Nov 25 19:59:48 compute-0 systemd-rc-local-generator[44232]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:59:48 compute-0 sudo[44203]: pam_unix(sudo:session): session closed for user root
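
The two systemd tasks above (enabled=False, state=stopped) disable kernel samepage merging before the node hosts VMs; the shell equivalent collapses to one command:

    # Stop and disable KSM and its tuning daemon
    systemctl disable --now ksm.service ksmtuned.service
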
Nov 25 19:59:49 compute-0 sudo[44392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vadnvpnosxsrgrafbyjmljuwkjbkezya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100788.8290977-517-103873430820582/AnsiballZ_command.py'
Nov 25 19:59:49 compute-0 sudo[44392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:49 compute-0 python3.9[44394]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:49 compute-0 sudo[44392]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:50 compute-0 sudo[44545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnjlkmxekrabareduenllxwwlcxpdiwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100789.6201763-525-254656685051074/AnsiballZ_command.py'
Nov 25 19:59:50 compute-0 sudo[44545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:50 compute-0 python3.9[44547]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:50 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 25 19:59:50 compute-0 sudo[44545]: pam_unix(sudo:session): session closed for user root
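
The mkswap/swapon pair brings a pre-allocated swap file online, which the kernel line above confirms as 1048572 KiB. As shell, assuming /swap already exists with mode 0600 and the desired size:

    # Write the swap signature, activate it, and verify
    mkswap /swap
    swapon /swap
    swapon --show
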
Nov 25 19:59:50 compute-0 sudo[44698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdydfsdxjsdqajcycglpenyuldygttcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100790.3975134-533-12995325435878/AnsiballZ_command.py'
Nov 25 19:59:50 compute-0 sudo[44698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:51 compute-0 python3.9[44700]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:52 compute-0 sudo[44698]: pam_unix(sudo:session): session closed for user root
Nov 25 19:59:53 compute-0 sudo[44860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whzqyawdoteltkoffgxffzwxwafkvmhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100792.7300198-541-188105680363891/AnsiballZ_command.py'
Nov 25 19:59:53 compute-0 sudo[44860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:53 compute-0 python3.9[44862]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:53 compute-0 sudo[44860]: pam_unix(sudo:session): session closed for user root
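
Writing to /sys/kernel/mm/ksm/run controls samepage merging: 0 stops KSM, 1 starts it, and 2 stops it and unmerges every already-merged page. One caveat worth noting: the task above runs with _uses_shell=False, and without a shell the > is passed to echo as a literal argument rather than performing a redirection; from a real shell the intended write is simply:

    # 2 = stop KSM and unmerge all previously merged pages
    echo 2 > /sys/kernel/mm/ksm/run
    # Read back the current mode
    cat /sys/kernel/mm/ksm/run
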
Nov 25 19:59:53 compute-0 sudo[45013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slwrbalsqwrifmpcefbequzpugykimvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100793.5002215-549-279390643837466/AnsiballZ_systemd.py'
Nov 25 19:59:53 compute-0 sudo[45013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 19:59:54 compute-0 python3.9[45015]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:59:54 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 19:59:54 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 19:59:54 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 19:59:54 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 25 19:59:54 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 19:59:54 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 25 19:59:54 compute-0 sudo[45013]: pam_unix(sudo:session): session closed for user root
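
Restarting systemd-sysctl.service re-applies every sysctl fragment (/etc/sysctl.conf, /etc/sysctl.d/*.conf, and friends) without a reboot; directly:

    # Re-apply kernel parameters after editing sysctl fragments
    systemctl restart systemd-sysctl.service
    # Equivalent one-shot form
    sysctl --system
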
Nov 25 19:59:54 compute-0 sshd-session[31400]: Connection closed by 192.168.122.30 port 57380
Nov 25 19:59:54 compute-0 sshd-session[31397]: pam_unix(sshd:session): session closed for user zuul
Nov 25 19:59:54 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 19:59:54 compute-0 systemd[1]: session-9.scope: Consumed 2min 23.872s CPU time.
Nov 25 19:59:54 compute-0 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Nov 25 19:59:54 compute-0 systemd-logind[789]: Removed session 9.
Nov 25 20:00:00 compute-0 sshd-session[45045]: Accepted publickey for zuul from 192.168.122.30 port 38724 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:00:00 compute-0 systemd-logind[789]: New session 10 of user zuul.
Nov 25 20:00:00 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 25 20:00:00 compute-0 sshd-session[45045]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:00:01 compute-0 python3.9[45198]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:00:02 compute-0 sudo[45352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lojkypvqyjctyvjyqorlpsfojohdfgjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100802.27732-36-132488807390273/AnsiballZ_getent.py'
Nov 25 20:00:02 compute-0 sudo[45352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:03 compute-0 python3.9[45354]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 20:00:03 compute-0 sudo[45352]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:03 compute-0 sudo[45505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obhcwmiaeyidzkoxoeystpvgnnqqafxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100803.2436101-44-223375536194856/AnsiballZ_group.py'
Nov 25 20:00:03 compute-0 sudo[45505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:03 compute-0 python3.9[45507]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 20:00:03 compute-0 groupadd[45508]: group added to /etc/group: name=openvswitch, GID=42476
Nov 25 20:00:03 compute-0 groupadd[45508]: group added to /etc/gshadow: name=openvswitch
Nov 25 20:00:03 compute-0 groupadd[45508]: new group: name=openvswitch, GID=42476
Nov 25 20:00:03 compute-0 sudo[45505]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:04 compute-0 sudo[45663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-newhmfbjgcmkjxsrfqpugoeqaqijoadx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100804.157334-52-207240743939841/AnsiballZ_user.py'
Nov 25 20:00:04 compute-0 sudo[45663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:04 compute-0 python3.9[45665]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 20:00:04 compute-0 useradd[45667]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 20:00:04 compute-0 useradd[45667]: add 'openvswitch' to group 'hugetlbfs'
Nov 25 20:00:04 compute-0 useradd[45667]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 25 20:00:05 compute-0 sudo[45663]: pam_unix(sudo:session): session closed for user root
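
The getent/group/user trio is the idempotent check-then-create pattern for a fixed-ID service account; a shell sketch, relying on the hugetlbfs group already existing (as it does on this node):

    getent passwd openvswitch || {
        groupadd -g 42476 openvswitch
        useradd -u 42476 -g openvswitch -G hugetlbfs \
                -c 'openvswitch user' -s /sbin/nologin openvswitch
    }
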
Nov 25 20:00:05 compute-0 sudo[45823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dizbsccnnnabjihnpxhybfwookdaboti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100805.334723-62-35845510665035/AnsiballZ_setup.py'
Nov 25 20:00:05 compute-0 sudo[45823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:06 compute-0 python3.9[45825]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:00:06 compute-0 sudo[45823]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:06 compute-0 sudo[45907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdlgchvfbqtnpskyxyidhscgzycesave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100805.334723-62-35845510665035/AnsiballZ_dnf.py'
Nov 25 20:00:06 compute-0 sudo[45907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:06 compute-0 python3.9[45909]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 20:00:09 compute-0 sudo[45907]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:10 compute-0 sudo[46070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfmbbwzkgzgvdisofglecweoyhyrnios ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100809.9468677-76-86242436543692/AnsiballZ_dnf.py'
Nov 25 20:00:10 compute-0 sudo[46070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:10 compute-0 python3.9[46072]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
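
The transaction is split into a download-only pass followed by the actual install, so the network-dependent step can fail or be retried independently; with dnf directly:

    # Fetch the packages into the local cache without installing
    dnf -y install --downloadonly openvswitch
    # Install from the already-populated cache
    dnf -y install openvswitch
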
Nov 25 20:00:21 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Nov 25 20:00:21 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 20:00:21 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 20:00:21 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 20:00:21 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 20:00:21 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 20:00:21 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 20:00:21 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 20:00:21 compute-0 groupadd[46095]: group added to /etc/group: name=unbound, GID=993
Nov 25 20:00:21 compute-0 groupadd[46095]: group added to /etc/gshadow: name=unbound
Nov 25 20:00:21 compute-0 groupadd[46095]: new group: name=unbound, GID=993
Nov 25 20:00:21 compute-0 useradd[46102]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 25 20:00:21 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 20:00:21 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 20:00:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 20:00:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 20:00:23 compute-0 systemd[1]: Reloading.
Nov 25 20:00:23 compute-0 systemd-rc-local-generator[46601]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:00:23 compute-0 systemd-sysv-generator[46605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:00:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 20:00:24 compute-0 sudo[46070]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 20:00:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 20:00:24 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.079s CPU time.
Nov 25 20:00:24 compute-0 systemd[1]: run-r4c620ff77b674cef8d4305acb890bd0d.service: Deactivated successfully.
Nov 25 20:00:25 compute-0 sudo[47169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pffsicrxadudppocldpsnfwrtzboxguv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100824.423404-84-181640399725364/AnsiballZ_systemd.py'
Nov 25 20:00:25 compute-0 sudo[47169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:25 compute-0 python3.9[47171]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:00:25 compute-0 systemd[1]: Reloading.
Nov 25 20:00:25 compute-0 systemd-sysv-generator[47205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:00:25 compute-0 systemd-rc-local-generator[47202]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:00:25 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 20:00:25 compute-0 chown[47213]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 20:00:25 compute-0 ovs-ctl[47218]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 20:00:25 compute-0 ovs-ctl[47218]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 20:00:26 compute-0 ovs-ctl[47218]: Starting ovsdb-server [  OK  ]
Nov 25 20:00:26 compute-0 ovs-vsctl[47268]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 20:00:26 compute-0 ovs-vsctl[47288]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"53dbc5fa-5cc8-4cbc-8a85-0750fc7ff954\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 20:00:26 compute-0 ovs-ctl[47218]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 20:00:26 compute-0 ovs-ctl[47218]: Enabling remote OVSDB managers [  OK  ]
Nov 25 20:00:26 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 20:00:26 compute-0 ovs-vsctl[47294]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 20:00:26 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 20:00:26 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 20:00:26 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 20:00:26 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 20:00:26 compute-0 ovs-ctl[47339]: Inserting openvswitch module [  OK  ]
Nov 25 20:00:26 compute-0 ovs-ctl[47308]: Starting ovs-vswitchd [  OK  ]
Nov 25 20:00:26 compute-0 ovs-vsctl[47360]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 20:00:26 compute-0 ovs-ctl[47308]: Enabling remote OVSDB managers [  OK  ]
Nov 25 20:00:26 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 20:00:26 compute-0 systemd[1]: Starting Open vSwitch...
Nov 25 20:00:26 compute-0 systemd[1]: Finished Open vSwitch.
Nov 25 20:00:26 compute-0 sudo[47169]: pam_unix(sudo:session): session closed for user root
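
On first start ovs-ctl creates /etc/openvswitch/conf.db, launches ovsdb-server and ovs-vswitchd, loads the openvswitch datapath module, and seeds the Open_vSwitch table with system IDs, exactly as logged above. To reproduce and verify by hand:

    systemctl enable --now openvswitch.service
    # Datapath module loaded and database initialized?
    lsmod | grep openvswitch
    ovs-vsctl show
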
Nov 25 20:00:27 compute-0 python3.9[47512]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:00:28 compute-0 sudo[47662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkgsuyfhjtunsbdxsyqbpjcsfmuyjcxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100827.8895888-102-263791926317933/AnsiballZ_sefcontext.py'
Nov 25 20:00:28 compute-0 sudo[47662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:28 compute-0 python3.9[47664]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 20:00:29 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Nov 25 20:00:29 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 20:00:29 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 20:00:29 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 20:00:29 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 20:00:29 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 20:00:29 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 20:00:29 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 20:00:30 compute-0 sudo[47662]: pam_unix(sudo:session): session closed for user root
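
community.general.sefcontext persists a file-context rule in the SELinux policy store, and the resulting policy reload is what produces the kernel "Converting ... SID table entries" lines. The semanage/restorecon equivalent:

    # Persist the context mapping for the edpm-config tree
    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    # Relabel anything already on disk to match
    restorecon -Rv /var/lib/edpm-config
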
Nov 25 20:00:31 compute-0 python3.9[47819]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:00:32 compute-0 sudo[47975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufgppjzowcjyxowvawqhnoyuhjxygvsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100831.6722143-120-196152541625139/AnsiballZ_dnf.py'
Nov 25 20:00:32 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 20:00:32 compute-0 sudo[47975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:32 compute-0 python3.9[47977]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:00:33 compute-0 sudo[47975]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:34 compute-0 sudo[48128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qipqiovgehozfoepyafobagcntipxrbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100833.776156-128-82089875830764/AnsiballZ_command.py'
Nov 25 20:00:34 compute-0 sudo[48128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:34 compute-0 python3.9[48130]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:00:35 compute-0 sudo[48128]: pam_unix(sudo:session): session closed for user root
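
rpm -V verifies the installed files of each package against the RPM database and prints nothing when everything matches; discrepancies show as a nine-character flag string per file (S size, M mode, 5 digest, D device, L link, U user, G group, T mtime, P capabilities). For instance:

    # Empty output and exit status 0 mean no file diverges from its package
    rpm -V nftables NetworkManager sysstat
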
Nov 25 20:00:36 compute-0 sudo[48415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzqbobiiobzwoqgacixunjfhhqkgmjux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100835.5774548-136-22010274000058/AnsiballZ_file.py'
Nov 25 20:00:36 compute-0 sudo[48415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:36 compute-0 python3.9[48417]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 20:00:36 compute-0 sudo[48415]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:37 compute-0 python3.9[48567]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:00:37 compute-0 sudo[48719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqghpthhexsrjokmvagkjtvstdroqejm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100837.465871-152-53924738291509/AnsiballZ_dnf.py'
Nov 25 20:00:37 compute-0 sudo[48719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:38 compute-0 python3.9[48721]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:00:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 20:00:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 20:00:40 compute-0 systemd[1]: Reloading.
Nov 25 20:00:40 compute-0 systemd-rc-local-generator[48760]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:00:40 compute-0 systemd-sysv-generator[48763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:00:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 20:00:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 20:00:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 20:00:40 compute-0 systemd[1]: run-raa7f065aedf54b79b90dc8286d3d0950.service: Deactivated successfully.
Nov 25 20:00:40 compute-0 sudo[48719]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:41 compute-0 sudo[49036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caddcadkrzxupgxagrhiaalfdsmmdqar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100840.9209738-160-73649733150354/AnsiballZ_systemd.py'
Nov 25 20:00:41 compute-0 sudo[49036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:41 compute-0 python3.9[49038]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:00:41 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 20:00:41 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 20:00:41 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 20:00:41 compute-0 systemd[1]: Stopping Network Manager...
Nov 25 20:00:41 compute-0 NetworkManager[7181]: <info>  [1764100841.6660] caught SIGTERM, shutting down normally.
Nov 25 20:00:41 compute-0 NetworkManager[7181]: <info>  [1764100841.6674] dhcp4 (eth0): canceled DHCP transaction
Nov 25 20:00:41 compute-0 NetworkManager[7181]: <info>  [1764100841.6674] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 20:00:41 compute-0 NetworkManager[7181]: <info>  [1764100841.6675] dhcp4 (eth0): state changed no lease
Nov 25 20:00:41 compute-0 NetworkManager[7181]: <info>  [1764100841.6677] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 20:00:41 compute-0 NetworkManager[7181]: <info>  [1764100841.6739] exiting (success)
Nov 25 20:00:41 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 20:00:41 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 20:00:41 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 20:00:41 compute-0 systemd[1]: Stopped Network Manager.
Nov 25 20:00:41 compute-0 systemd[1]: NetworkManager.service: Consumed 12.343s CPU time, 4.3M memory peak, read 0B from disk, written 48.5K to disk.
Nov 25 20:00:41 compute-0 systemd[1]: Starting Network Manager...
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.7682] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:d0551c86-76fe-4da9-b9a1-a5fabb73b624)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.7684] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.7756] manager[0x555c6c070090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 20:00:41 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 20:00:41 compute-0 systemd[1]: Started Hostname Service.
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.8954] hostname: hostname: using hostnamed
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.8955] hostname: static hostname changed from (none) to "compute-0"
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.8962] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.8968] manager[0x555c6c070090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.8969] manager[0x555c6c070090]: rfkill: WWAN hardware radio set enabled
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9004] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9019] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9020] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9021] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9021] manager: Networking is enabled by state file
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9024] settings: Loaded settings plugin: keyfile (internal)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9030] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9072] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9086] dhcp: init: Using DHCP client 'internal'
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9091] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9100] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9109] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9122] device (lo): Activation: starting connection 'lo' (907c96cc-9d5c-4708-9196-ba7e632419fa)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9132] device (eth0): carrier: link connected
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9139] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9146] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9146] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9155] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9166] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9175] device (eth1): carrier: link connected
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9183] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9191] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (29a28c5d-7338-527a-8ab3-91e82e4be558) (indicated)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9192] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9200] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9211] device (eth1): Activation: starting connection 'ci-private-network' (29a28c5d-7338-527a-8ab3-91e82e4be558)
Nov 25 20:00:41 compute-0 systemd[1]: Started Network Manager.
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9221] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9245] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9250] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9253] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9255] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9260] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9263] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9268] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9275] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9286] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9292] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9306] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9330] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9339] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9341] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9346] device (lo): Activation: successful, device activated.
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9353] dhcp4 (eth0): state changed new lease, address=38.102.83.113
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9360] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 20:00:41 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9429] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9435] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9443] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9448] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9453] device (eth1): Activation: successful, device activated.
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9463] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9466] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9470] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9473] device (eth0): Activation: successful, device activated.
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9479] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 20:00:41 compute-0 NetworkManager[49051]: <info>  [1764100841.9482] manager: startup complete
Nov 25 20:00:41 compute-0 sudo[49036]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:41 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 25 20:00:42 compute-0 sudo[49263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytbtktrohqjtadvfvkgyoyhqmuuqjovb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100842.2258523-168-186105925149934/AnsiballZ_dnf.py'
Nov 25 20:00:42 compute-0 sudo[49263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:42 compute-0 python3.9[49265]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:00:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 20:00:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 20:00:47 compute-0 systemd[1]: Reloading.
Nov 25 20:00:47 compute-0 systemd-rc-local-generator[49312]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:00:47 compute-0 systemd-sysv-generator[49316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:00:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 20:00:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 20:00:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 20:00:48 compute-0 systemd[1]: run-rd18895b71ef146488e8603c8ea87d4c7.service: Deactivated successfully.
Nov 25 20:00:48 compute-0 sudo[49263]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:49 compute-0 sudo[49722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvtlbyyvhygtclbyuvjrzlesvpvupios ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100848.7418942-180-65891355063363/AnsiballZ_stat.py'
Nov 25 20:00:49 compute-0 sudo[49722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:49 compute-0 python3.9[49724]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:00:49 compute-0 sudo[49722]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:50 compute-0 sudo[49874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkaumxxxonrlwxzwzkzpxgbmppgzvukh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100849.52995-189-107942565663287/AnsiballZ_ini_file.py'
Nov 25 20:00:50 compute-0 sudo[49874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:50 compute-0 python3.9[49876]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:50 compute-0 sudo[49874]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:50 compute-0 sudo[50028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vznmjmtyirrwcnulzkwkupudsjmbjmsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100850.6345887-199-178642200462175/AnsiballZ_ini_file.py'
Nov 25 20:00:51 compute-0 sudo[50028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:51 compute-0 python3.9[50030]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:51 compute-0 sudo[50028]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:51 compute-0 sudo[50180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwnbrobyusbnjhasjsdfjzbzsryrcto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100851.4404964-199-29860360189060/AnsiballZ_ini_file.py'
Nov 25 20:00:51 compute-0 sudo[50180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:52 compute-0 python3.9[50182]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:52 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 20:00:52 compute-0 sudo[50180]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:52 compute-0 sudo[50332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdxfdslzmzllphfodsgpvrwjpqyjxrix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100852.3302479-214-12577681541980/AnsiballZ_ini_file.py'
Nov 25 20:00:52 compute-0 sudo[50332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:52 compute-0 python3.9[50334]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:52 compute-0 sudo[50332]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:53 compute-0 sudo[50485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqiytqsiaomjmtyxnfkggubvlartviqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100853.1293044-214-125767430101275/AnsiballZ_ini_file.py'
Nov 25 20:00:53 compute-0 sudo[50485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:53 compute-0 sshd-session[50411]: Connection closed by 176.32.195.85 port 60023
Nov 25 20:00:53 compute-0 python3.9[50487]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:53 compute-0 sudo[50485]: pam_unix(sudo:session): session closed for user root
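
Taken together, these ini_file tasks normalize NetworkManager's INI configuration: no-auto-default=* is set in [main], and dns= / rc-manager= overrides are removed from both NetworkManager.conf and cloud-init's drop-in. Since crudini was installed earlier in this run, the same edits could be made as follows (assuming the target files exist; ini_file's create=True would otherwise create them):

    crudini --set /etc/NetworkManager/NetworkManager.conf main no-auto-default '*'
    crudini --del /etc/NetworkManager/NetworkManager.conf main dns
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main dns
    crudini --del /etc/NetworkManager/NetworkManager.conf main rc-manager
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main rc-manager
    # Make NetworkManager pick the changes up
    nmcli general reload
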
Nov 25 20:00:54 compute-0 sudo[50637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inocbetpntqwvmsxhpnnhmbpnrwhymno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100854.0983608-229-156304872627222/AnsiballZ_stat.py'
Nov 25 20:00:54 compute-0 sudo[50637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:54 compute-0 python3.9[50639]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:54 compute-0 sudo[50637]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:55 compute-0 sudo[50760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ommsriijphaomoaojprutyqigrwectzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100854.0983608-229-156304872627222/AnsiballZ_copy.py'
Nov 25 20:00:55 compute-0 sudo[50760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:55 compute-0 python3.9[50762]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100854.0983608-229-156304872627222/.source _original_basename=.g_mt9kv2 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:55 compute-0 sudo[50760]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:56 compute-0 sudo[50912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlycpxhjobwccyspqjrpabnuqmdjqzfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100855.7098246-244-141166986912736/AnsiballZ_file.py'
Nov 25 20:00:56 compute-0 sudo[50912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:56 compute-0 python3.9[50914]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:56 compute-0 sudo[50912]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:57 compute-0 sudo[51064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtynlthvuzxntnqtsvlolqvcmbroauzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100856.5047193-252-52382631000878/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 25 20:00:57 compute-0 sudo[51064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:57 compute-0 python3.9[51066]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 20:00:57 compute-0 sudo[51064]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:57 compute-0 sudo[51216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyhtkhzyrbfmfneyehhwtroaskqqgsds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100857.469262-261-201797843423054/AnsiballZ_file.py'
Nov 25 20:00:57 compute-0 sudo[51216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:58 compute-0 python3.9[51218]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:58 compute-0 sudo[51216]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:58 compute-0 sudo[51368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzutozsfsissssfltpizqhqoghmazjhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100858.4797711-271-55128600128043/AnsiballZ_stat.py'
Nov 25 20:00:58 compute-0 sudo[51368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:59 compute-0 sudo[51368]: pam_unix(sudo:session): session closed for user root
Nov 25 20:00:59 compute-0 sudo[51491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmehktehecntgptevbjwkuyrqdzpgrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100858.4797711-271-55128600128043/AnsiballZ_copy.py'
Nov 25 20:00:59 compute-0 sudo[51491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:00:59 compute-0 sudo[51491]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:00 compute-0 sudo[51643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icougemdxpirvhycdwocdxtaryulzpar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100860.0790539-286-115472089571684/AnsiballZ_slurp.py'
Nov 25 20:01:00 compute-0 sudo[51643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:00 compute-0 python3.9[51646]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 20:01:00 compute-0 sudo[51643]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:01 compute-0 sudo[51819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlimaxefppgqxjujxipekwxwcjpohll ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100861.0149903-295-40426471337704/async_wrapper.py j388077681177 300 /home/zuul/.ansible/tmp/ansible-tmp-1764100861.0149903-295-40426471337704/AnsiballZ_edpm_os_net_config.py _'
Nov 25 20:01:01 compute-0 sudo[51819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:01 compute-0 ansible-async_wrapper.py[51821]: Invoked with j388077681177 300 /home/zuul/.ansible/tmp/ansible-tmp-1764100861.0149903-295-40426471337704/AnsiballZ_edpm_os_net_config.py _
Nov 25 20:01:01 compute-0 ansible-async_wrapper.py[51824]: Starting module and watcher
Nov 25 20:01:01 compute-0 ansible-async_wrapper.py[51824]: Start watching 51825 (300)
Nov 25 20:01:01 compute-0 ansible-async_wrapper.py[51825]: Start module (51825)
Nov 25 20:01:01 compute-0 ansible-async_wrapper.py[51821]: Return async_wrapper task started.
Nov 25 20:01:01 compute-0 CROND[51828]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 20:01:02 compute-0 sudo[51819]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:02 compute-0 run-parts[51831]: (/etc/cron.hourly) starting 0anacron
Nov 25 20:01:02 compute-0 anacron[51839]: Anacron started on 2025-11-25
Nov 25 20:01:02 compute-0 anacron[51839]: Will run job `cron.daily' in 21 min.
Nov 25 20:01:02 compute-0 anacron[51839]: Will run job `cron.weekly' in 41 min.
Nov 25 20:01:02 compute-0 anacron[51839]: Will run job `cron.monthly' in 61 min.
Nov 25 20:01:02 compute-0 anacron[51839]: Jobs will be executed sequentially
Nov 25 20:01:02 compute-0 run-parts[51841]: (/etc/cron.hourly) finished 0anacron
Nov 25 20:01:02 compute-0 CROND[51827]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 20:01:02 compute-0 python3.9[51826]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
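
edpm_os_net_config wraps the os-net-config tool, and it is launched through async_wrapper with a 300-second timeout because applying network configuration can momentarily drop the SSH connection Ansible itself is using. A rough CLI equivalent of the logged parameters (the exact flag mapping is an assumption from the module arguments; use_nmstate selects the nmstate provider inside the tool):

    # With --detailed-exit-codes, exit status 2 means "changes were applied"
    # rather than an error; --cleanup removes interfaces absent from the config
    os-net-config --debug --detailed-exit-codes --cleanup \
        -c /etc/os-net-config/config.yaml
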
Nov 25 20:01:02 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 20:01:02 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 20:01:02 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 20:01:02 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 20:01:02 compute-0 kernel: cfg80211: failed to load regulatory.db
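[Editor's note] The regulatory.db complaint is benign on a Nova guest with no wireless hardware: error -2 is the kernel's -ENOENT, meaning the firmware file simply is not shipped in the image, and cfg80211 falls back to its built-in regulatory data. The errno mapping is easy to confirm:

```python
import errno
import os

print(errno.errorcode[2], "-", os.strerror(2))
# ENOENT - No such file or directory
```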
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.5786] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.5806] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6521] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6523] audit: op="connection-add" uuid="a8bf833a-425c-4ce1-a124-d7c11765447d" name="br-ex-br" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6544] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6545] audit: op="connection-add" uuid="a4e2d1af-6a7d-4d4f-827c-f4cbc345e5b2" name="br-ex-port" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6557] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6558] audit: op="connection-add" uuid="81450396-0904-45d2-8811-c3dddd84fdf9" name="eth1-port" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6572] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6573] audit: op="connection-add" uuid="e46cbf84-6456-4812-aad5-dc17fe565d90" name="vlan20-port" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6586] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6587] audit: op="connection-add" uuid="b7d646f5-4bbe-4a4b-96b8-be0f2bbf1c2c" name="vlan21-port" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6600] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6602] audit: op="connection-add" uuid="550d5eab-ee13-46e0-a190-3b01ed04dee1" name="vlan22-port" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6614] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6615] audit: op="connection-add" uuid="839a99a6-3739-4a83-9a49-194bb4081406" name="vlan23-port" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6636] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6653] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6654] audit: op="connection-add" uuid="4ac3157f-0fbc-4601-9737-8fa21043155e" name="br-ex-if" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6720] audit: op="connection-update" uuid="29a28c5d-7338-527a-8ab3-91e82e4be558" name="ci-private-network" args="connection.controller,connection.port-type,connection.slave-type,connection.master,connection.timestamp,ipv6.routing-rules,ipv6.dns,ipv6.method,ipv6.addresses,ipv6.addr-gen-mode,ipv6.routes,ipv4.never-default,ipv4.dns,ipv4.method,ipv4.addresses,ipv4.routes,ipv4.routing-rules,ovs-external-ids.data,ovs-interface.type" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6743] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6745] audit: op="connection-add" uuid="5e23ecdb-2406-40fe-b3bb-fb8276ebd236" name="vlan20-if" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6768] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6770] audit: op="connection-add" uuid="2b6c782b-6c25-4d92-ba25-fbe64c0977e2" name="vlan21-if" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6791] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6793] audit: op="connection-add" uuid="69598e05-1db4-4459-abe8-cbcc2f02942f" name="vlan22-if" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6816] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6819] audit: op="connection-add" uuid="33de581a-0e41-499a-a23e-793287e2caa2" name="vlan23-if" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6835] audit: op="connection-delete" uuid="9222631e-5368-3ea4-b024-56475051c0e7" name="Wired connection 1" pid=51842 uid=0 result="success"
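[Editor's note] The connection-add burst above builds NetworkManager's standard three-layer OVS topology: one ovs-bridge connection (br-ex-br), an ovs-port connection per attachment point (br-ex-port, eth1-port, vlan2N-port), and an ovs-interface or ethernet connection enslaved inside each port. A sketch of equivalent nmcli calls driven from Python; the connection names mirror the log, but the properties os-net-config/nmstate actually set are richer than shown here:

```python
import subprocess

def nmcli(*args):
    # Thin wrapper; raises on any non-zero nmcli exit.
    subprocess.run(["nmcli", "connection", "add", *args], check=True)

# Bridge, a port on the bridge, and the internal interface inside the port,
# mirroring br-ex-br / br-ex-port / br-ex-if above.
nmcli("type", "ovs-bridge", "conn.interface", "br-ex", "con-name", "br-ex-br")
nmcli("type", "ovs-port", "conn.interface", "br-ex",
      "master", "br-ex", "con-name", "br-ex-port")
nmcli("type", "ovs-interface", "slave-type", "ovs-port", "conn.interface",
      "br-ex", "master", "br-ex-port", "con-name", "br-ex-if")
```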
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6849] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6862] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6866] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (a8bf833a-425c-4ce1-a124-d7c11765447d)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6867] audit: op="connection-activate" uuid="a8bf833a-425c-4ce1-a124-d7c11765447d" name="br-ex-br" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6870] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6878] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6883] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a4e2d1af-6a7d-4d4f-827c-f4cbc345e5b2)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6885] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6892] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6896] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (81450396-0904-45d2-8811-c3dddd84fdf9)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6899] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6906] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6911] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (e46cbf84-6456-4812-aad5-dc17fe565d90)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6913] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6921] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6925] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b7d646f5-4bbe-4a4b-96b8-be0f2bbf1c2c)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6927] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6935] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6940] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (550d5eab-ee13-46e0-a190-3b01ed04dee1)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6942] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6951] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6956] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (839a99a6-3739-4a83-9a49-194bb4081406)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6957] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6961] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6964] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6973] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6980] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6986] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (4ac3157f-0fbc-4601-9737-8fa21043155e)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6987] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6991] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6993] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6995] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.6996] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7010] device (eth1): disconnecting for new activation request.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7011] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7015] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7017] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7018] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7021] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7026] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7031] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (5e23ecdb-2406-40fe-b3bb-fb8276ebd236)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7032] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7035] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7037] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7039] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7042] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7049] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7055] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (2b6c782b-6c25-4d92-ba25-fbe64c0977e2)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7056] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7062] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7064] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7066] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7071] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7078] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7082] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (69598e05-1db4-4459-abe8-cbcc2f02942f)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7083] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7086] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7089] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7090] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7094] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7100] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7105] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (33de581a-0e41-499a-a23e-793287e2caa2)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7105] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7109] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7111] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7112] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7114] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7135] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7137] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7141] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7144] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7542] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7546] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7550] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7553] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7555] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7560] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7564] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7567] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7568] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7573] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7577] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7581] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7583] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7587] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 systemd-udevd[51848]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 20:01:04 compute-0 kernel: Timeout policy base is empty
Nov 25 20:01:04 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7591] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7595] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7596] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7601] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7605] dhcp4 (eth0): canceled DHCP transaction
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7605] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7605] dhcp4 (eth0): state changed no lease
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7607] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7617] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7621] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51842 uid=0 result="fail" reason="Device is not activated"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7649] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7657] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7666] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7669] dhcp4 (eth0): state changed new lease, address=38.102.83.113
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7674] device (eth1): disconnecting for new activation request.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7675] audit: op="connection-activate" uuid="29a28c5d-7338-527a-8ab3-91e82e4be558" name="ci-private-network" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7716] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7726] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51842 uid=0 result="success"
Nov 25 20:01:04 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7822] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7976] device (eth1): Activation: starting connection 'ci-private-network' (29a28c5d-7338-527a-8ab3-91e82e4be558)
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7991] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.7994] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8002] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8004] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8005] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8006] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8007] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8009] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8010] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8021] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8027] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8031] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8035] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8040] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8043] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8048] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8051] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8056] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8059] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8064] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8067] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8072] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8075] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8080] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8085] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8093] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 kernel: br-ex: entered promiscuous mode
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8139] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8146] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8151] device (eth1): Activation: successful, device activated.
Nov 25 20:01:04 compute-0 kernel: vlan22: entered promiscuous mode
Nov 25 20:01:04 compute-0 systemd-udevd[51847]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 20:01:04 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8276] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8287] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8341] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 20:01:04 compute-0 kernel: vlan23: entered promiscuous mode
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8366] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8371] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 systemd-udevd[51846]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8383] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8392] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8405] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8407] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8414] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 kernel: vlan20: entered promiscuous mode
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8494] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 25 20:01:04 compute-0 kernel: vlan21: entered promiscuous mode
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8523] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8571] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8574] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8583] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8593] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8602] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8619] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8626] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8632] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8638] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8645] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8695] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8698] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 20:01:04 compute-0 NetworkManager[49051]: <info>  [1764100864.8708] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
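[Editor's note] Every device above climbs the same NetworkManager activation ladder: unmanaged -> unavailable -> disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated. A small sketch for auditing that ordering in a captured journal; the regex is shaped to the exact line format used here:

```python
import re

STATES = ["unmanaged", "unavailable", "disconnected", "prepare", "config",
          "ip-config", "ip-check", "secondaries", "activated"]
CHANGE_RE = re.compile(r"device \((?P<dev>[^)]+)\).*?state change: "
                       r"(?P<old>[\w-]+) -> (?P<new>[\w-]+)")

def backward_transitions(lines):
    """Yield (device, old, new) whenever a device moves down the ladder.
    States outside the forward ladder (e.g. eth1's activated ->
    deactivating during its reactivation above) are skipped."""
    for line in lines:
        m = CHANGE_RE.search(line)
        if m and m["old"] in STATES and m["new"] in STATES:
            if STATES.index(m["new"]) < STATES.index(m["old"]):
                yield m["dev"], m["old"], m["new"]
```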
Nov 25 20:01:05 compute-0 sudo[52197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dooxzkqmsgcwinqnzxunsibfmbjahuib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100865.1371698-295-230733893925916/AnsiballZ_async_status.py'
Nov 25 20:01:05 compute-0 sudo[52197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:05 compute-0 python3.9[52199]: ansible-ansible.legacy.async_status Invoked with jid=j388077681177.51821 mode=status _async_dir=/root/.ansible_async
Nov 25 20:01:05 compute-0 sudo[52197]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:06 compute-0 NetworkManager[49051]: <info>  [1764100866.0024] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51842 uid=0 result="success"
Nov 25 20:01:06 compute-0 NetworkManager[49051]: <info>  [1764100866.2733] checkpoint[0x555c6c046950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 20:01:06 compute-0 NetworkManager[49051]: <info>  [1764100866.2736] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51842 uid=0 result="success"
Nov 25 20:01:06 compute-0 NetworkManager[49051]: <info>  [1764100866.7133] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51842 uid=0 result="success"
Nov 25 20:01:06 compute-0 NetworkManager[49051]: <info>  [1764100866.7151] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51842 uid=0 result="success"
Nov 25 20:01:06 compute-0 ansible-async_wrapper.py[51824]: 51825 still running (300)
Nov 25 20:01:07 compute-0 NetworkManager[49051]: <info>  [1764100867.0011] audit: op="networking-control" arg="global-dns-configuration" pid=51842 uid=0 result="success"
Nov 25 20:01:07 compute-0 NetworkManager[49051]: <info>  [1764100867.0039] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 25 20:01:07 compute-0 NetworkManager[49051]: <info>  [1764100867.0065] audit: op="networking-control" arg="global-dns-configuration" pid=51842 uid=0 result="success"
Nov 25 20:01:07 compute-0 NetworkManager[49051]: <info>  [1764100867.0511] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51842 uid=0 result="success"
Nov 25 20:01:07 compute-0 NetworkManager[49051]: <info>  [1764100867.1808] checkpoint[0x555c6c046a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 20:01:07 compute-0 NetworkManager[49051]: <info>  [1764100867.1811] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51842 uid=0 result="success"
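[Editor's note] Checkpoints 1 and 2 bracket the two configuration passes: the module creates a checkpoint, keeps stretching its rollback timeout while it works, and destroys it once the new state is confirmed; had connectivity been lost, NetworkManager would have rolled everything back instead. A rough sketch of the same lifecycle through NetworkManager's D-Bus checkpoint methods via busctl; the method names match the audit ops above, but the signatures and argument values here are from memory and should be checked against the NM D-Bus documentation:

```python
import subprocess

NM = "org.freedesktop.NetworkManager"

def nm_call(method, sig, *args):
    out = subprocess.run(
        ["busctl", "call", NM, "/org/freedesktop/NetworkManager", NM,
         method, sig, *args],
        check=True, capture_output=True, text=True)
    return out.stdout.strip()

# Empty device array ("0") = checkpoint all devices; 60 s rollback timeout.
reply = nm_call("CheckpointCreate", "aouu", "0", "60", "0")
path = reply.split('"')[1]   # e.g. /org/freedesktop/NetworkManager/Checkpoint/1
nm_call("CheckpointAdjustRollbackTimeout", "ou", path, "120")
nm_call("CheckpointDestroy", "o", path)  # commit: cancel the pending rollback
```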
Nov 25 20:01:07 compute-0 ansible-async_wrapper.py[51825]: Module complete (51825)
Nov 25 20:01:09 compute-0 sudo[52304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bszmzjbtrwdsnhuvvsdxyhpdcnlhoxkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100865.1371698-295-230733893925916/AnsiballZ_async_status.py'
Nov 25 20:01:09 compute-0 sudo[52304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:09 compute-0 python3.9[52306]: ansible-ansible.legacy.async_status Invoked with jid=j388077681177.51821 mode=status _async_dir=/root/.ansible_async
Nov 25 20:01:09 compute-0 sudo[52304]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:09 compute-0 sudo[52403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzzhreqqzoksmqabqwaqqmuimxdmiqua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100865.1371698-295-230733893925916/AnsiballZ_async_status.py'
Nov 25 20:01:09 compute-0 sudo[52403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:10 compute-0 python3.9[52405]: ansible-ansible.legacy.async_status Invoked with jid=j388077681177.51821 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 20:01:10 compute-0 sudo[52403]: pam_unix(sudo:session): session closed for user root
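[Editor's note] The three async_status calls are Ansible's stock polling loop: mode=status until the job file reports finished, then one mode=cleanup to remove it. The wrapper keeps results as JSON in the async dir under the jid; a reading sketch, assuming the usual started/finished fields in the job file:

```python
import json
from pathlib import Path

jid = "j388077681177.51821"                    # from the log above
job_file = Path("/root/.ansible_async") / jid  # _async_dir in the invocations

data = json.loads(job_file.read_text())
if data.get("finished"):
    print("finished, changed =", data.get("changed"))
else:
    print("still running:", data.get("ansible_job_id"))
```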
Nov 25 20:01:10 compute-0 sudo[52555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvnwnswgwigbmtbefiewxzcjhjfiwpud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100870.3545454-322-75035494568759/AnsiballZ_stat.py'
Nov 25 20:01:10 compute-0 sudo[52555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:10 compute-0 python3.9[52557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:01:10 compute-0 sudo[52555]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:11 compute-0 sudo[52678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmdvzbirbizyikjhiltzgbzpiagpvbyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100870.3545454-322-75035494568759/AnsiballZ_copy.py'
Nov 25 20:01:11 compute-0 sudo[52678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:11 compute-0 python3.9[52680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100870.3545454-322-75035494568759/.source.returncode _original_basename=.lul90tvf follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:11 compute-0 sudo[52678]: pam_unix(sudo:session): session closed for user root
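[Editor's note] The checksum logged for os-net-config.returncode, b6589fc6ab0dc82cf12099d1c2d40ab994e8410c, matches the SHA-1 of the single character "0", so the file records a zero exit code and the network run is being marked successful. Two lines to confirm:

```python
import hashlib

print(hashlib.sha1(b"0").hexdigest())
# b6589fc6ab0dc82cf12099d1c2d40ab994e8410c  -> the file content is "0"
```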
Nov 25 20:01:11 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 20:01:11 compute-0 ansible-async_wrapper.py[51824]: Done in kid B.
Nov 25 20:01:12 compute-0 sudo[52832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhrrstoaodxuhxdiwyvufguldnnlzoms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100871.8881297-338-178127313254004/AnsiballZ_stat.py'
Nov 25 20:01:12 compute-0 sudo[52832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:12 compute-0 python3.9[52834]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:01:12 compute-0 sudo[52832]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:12 compute-0 sudo[52956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lalpfpxewtopjtoweaexvgjrftrkzeov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100871.8881297-338-178127313254004/AnsiballZ_copy.py'
Nov 25 20:01:12 compute-0 sudo[52956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:13 compute-0 python3.9[52958]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100871.8881297-338-178127313254004/.source.cfg _original_basename=.vu8_6bu4 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:13 compute-0 sudo[52956]: pam_unix(sudo:session): session closed for user root
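[Editor's note] Writing 99-edpm-disable-network-config.cfg into /etc/cloud/cloud.cfg.d hands interface management over to os-net-config for good: a drop-in there is cloud-init's documented switch for skipping network rendering on later boots. The payload itself is not logged; the conventional stanza is a one-liner:

```python
from pathlib import Path

cfg = Path("/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg")
# Documented cloud-init knob to skip network configuration entirely;
# assumed content only, since the log does not show the file body.
cfg.write_text("network: {config: disabled}\n")
```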
Nov 25 20:01:13 compute-0 sudo[53108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbjuflfrezggidkenbkhbquvhrrwdzzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100873.3962634-353-132722794319057/AnsiballZ_systemd.py'
Nov 25 20:01:13 compute-0 sudo[53108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:14 compute-0 python3.9[53110]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:01:14 compute-0 systemd[1]: Reloading Network Manager...
Nov 25 20:01:14 compute-0 NetworkManager[49051]: <info>  [1764100874.1709] audit: op="reload" arg="0" pid=53114 uid=0 result="success"
Nov 25 20:01:14 compute-0 NetworkManager[49051]: <info>  [1764100874.1719] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 20:01:14 compute-0 systemd[1]: Reloaded Network Manager.
Nov 25 20:01:14 compute-0 sudo[53108]: pam_unix(sudo:session): session closed for user root
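[Editor's note] state=reloaded on ansible.builtin.systemd maps to systemctl reload; for NetworkManager that delivers SIGHUP, which is exactly the config re-read logged at 20:01:14 with its conf.d file list. The same call from Python:

```python
import subprocess

# Equivalent of: ansible.builtin.systemd name=NetworkManager state=reloaded
subprocess.run(["systemctl", "reload", "NetworkManager"], check=True)
```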
Nov 25 20:01:14 compute-0 sshd-session[45048]: Connection closed by 192.168.122.30 port 38724
Nov 25 20:01:14 compute-0 sshd-session[45045]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:01:14 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 20:01:14 compute-0 systemd[1]: session-10.scope: Consumed 55.013s CPU time.
Nov 25 20:01:14 compute-0 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Nov 25 20:01:14 compute-0 systemd-logind[789]: Removed session 10.
Nov 25 20:01:19 compute-0 sshd-session[53145]: Accepted publickey for zuul from 192.168.122.30 port 53204 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:01:19 compute-0 systemd-logind[789]: New session 11 of user zuul.
Nov 25 20:01:19 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 25 20:01:19 compute-0 sshd-session[53145]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:01:20 compute-0 python3.9[53298]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:01:21 compute-0 python3.9[53452]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:01:23 compute-0 python3.9[53646]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:01:23 compute-0 sshd-session[53148]: Connection closed by 192.168.122.30 port 53204
Nov 25 20:01:23 compute-0 sshd-session[53145]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:01:23 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 20:01:23 compute-0 systemd[1]: session-11.scope: Consumed 2.839s CPU time.
Nov 25 20:01:23 compute-0 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Nov 25 20:01:23 compute-0 systemd-logind[789]: Removed session 11.
Nov 25 20:01:24 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 20:01:28 compute-0 sshd-session[53675]: Accepted publickey for zuul from 192.168.122.30 port 38828 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:01:28 compute-0 systemd-logind[789]: New session 12 of user zuul.
Nov 25 20:01:28 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 25 20:01:28 compute-0 sshd-session[53675]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:01:30 compute-0 python3.9[53828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:01:31 compute-0 python3.9[53982]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:01:32 compute-0 sudo[54137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtforbyqzogcldaabpccsbldrqnfcac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100891.6709597-40-152483280302320/AnsiballZ_setup.py'
Nov 25 20:01:32 compute-0 sudo[54137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:32 compute-0 python3.9[54139]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:01:32 compute-0 sudo[54137]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:33 compute-0 sudo[54221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aasdksufczuuxkiwesdpjvorwmwdrpsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100891.6709597-40-152483280302320/AnsiballZ_dnf.py'
Nov 25 20:01:33 compute-0 sudo[54221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:33 compute-0 python3.9[54223]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:01:34 compute-0 sudo[54221]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:35 compute-0 sudo[54375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwqtaueszyqioqkdijharymdpxpbwyta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100894.8390722-52-25610350468450/AnsiballZ_setup.py'
Nov 25 20:01:35 compute-0 sudo[54375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:35 compute-0 python3.9[54377]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:01:35 compute-0 sudo[54375]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:36 compute-0 sudo[54570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaizmxdirqjmmruocdxvqvihgqfyfpfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100896.08938-63-59839012763507/AnsiballZ_file.py'
Nov 25 20:01:36 compute-0 sudo[54570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:36 compute-0 python3.9[54572]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:36 compute-0 sudo[54570]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:37 compute-0 sudo[54722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czebogjnbywwztepmrqdqfwtzmeidgqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100897.119008-71-33903102360929/AnsiballZ_command.py'
Nov 25 20:01:37 compute-0 sudo[54722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:37 compute-0 python3.9[54724]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat879341811-merged.mount: Deactivated successfully.
Nov 25 20:01:37 compute-0 podman[54725]: 2025-11-25 20:01:37.9806755 +0000 UTC m=+0.071127655 system refresh
Nov 25 20:01:38 compute-0 sudo[54722]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:38 compute-0 sudo[54885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-actzcmsueeoumxguuqzuareduzkebrho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100898.235002-79-103922409421199/AnsiballZ_stat.py'
Nov 25 20:01:38 compute-0 sudo[54885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:01:39 compute-0 python3.9[54887]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:01:39 compute-0 sudo[54885]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:39 compute-0 sudo[55008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyqclsbjxhrnpxcxdqfjbvnecfbflbqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100898.235002-79-103922409421199/AnsiballZ_copy.py'
Nov 25 20:01:39 compute-0 sudo[55008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:39 compute-0 python3.9[55010]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100898.235002-79-103922409421199/.source.json follow=False _original_basename=podman_network_config.j2 checksum=a926240cc978ac65479df6ba848deb79449b1013 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:40 compute-0 sudo[55008]: pam_unix(sudo:session): session closed for user root
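[Editor's note] With the network definition copied to /etc/containers/networks/podman.json, the result can be verified the same way the playbook probed it at 20:01:37. A sketch shelling out to podman; network inspect returns a JSON array, and only the "name" key is assumed here since other fields vary across podman versions:

```python
import json
import subprocess

out = subprocess.run(["podman", "network", "inspect", "podman"],
                     check=True, capture_output=True, text=True).stdout
nets = json.loads(out)
print(nets[0]["name"])   # -> podman
```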
Nov 25 20:01:40 compute-0 sudo[55160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cozsqnybdifitfemmkwqdilgldvxfbjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100900.203094-94-6062205530744/AnsiballZ_stat.py'
Nov 25 20:01:40 compute-0 sudo[55160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:40 compute-0 python3.9[55162]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:01:40 compute-0 sudo[55160]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:41 compute-0 sudo[55283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoyocymzdlonyvaxzyllyikrlstlldvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100900.203094-94-6062205530744/AnsiballZ_copy.py'
Nov 25 20:01:41 compute-0 sudo[55283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:41 compute-0 python3.9[55285]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764100900.203094-94-6062205530744/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c39bcead5e8c590f5dad5226baddf0740c819914 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:01:41 compute-0 sudo[55283]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:42 compute-0 sudo[55435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlxteooamotrqhrhuieojqzxcvbhmmwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100901.8217847-110-25111966231709/AnsiballZ_ini_file.py'
Nov 25 20:01:42 compute-0 sudo[55435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:42 compute-0 python3.9[55437]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:01:42 compute-0 sudo[55435]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:43 compute-0 sudo[55587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbxjqlgmectrpxvxtbrpdqgylqbofbsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100902.7974033-110-84268076194668/AnsiballZ_ini_file.py'
Nov 25 20:01:43 compute-0 sudo[55587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:43 compute-0 python3.9[55589]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:01:43 compute-0 sudo[55587]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:43 compute-0 sudo[55739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eootqqpmqrentavmfqseyqstfgnbqval ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100903.587786-110-74987120870659/AnsiballZ_ini_file.py'
Nov 25 20:01:43 compute-0 sudo[55739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:44 compute-0 python3.9[55741]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:01:44 compute-0 sudo[55739]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:44 compute-0 sudo[55891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeiptwonphyrjypfkczvfyzouaohomwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100904.420533-110-112617561841417/AnsiballZ_ini_file.py'
Nov 25 20:01:44 compute-0 sudo[55891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:45 compute-0 python3.9[55893]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:01:45 compute-0 sudo[55891]: pam_unix(sudo:session): session closed for user root
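
The four community.general.ini_file invocations above assemble /etc/containers/containers.conf one option at a time. From the logged module arguments, the file should end up roughly as sketched below; the verification command is an assumption, not captured output:

    # Expected /etc/containers/containers.conf after the four edits (sketch):
    #   [containers]
    #   pids_limit = 4096
    #   [engine]
    #   events_logger = "journald"
    #   runtime = "crun"
    #   [network]
    #   network_backend = "netavark"
    podman info --format '{{.Host.OCIRuntime.Name}}'   # expect: crun
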
Nov 25 20:01:45 compute-0 sudo[56043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxelyjxodnqsywajcexoqilymgihookh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100905.3672292-141-271421740092555/AnsiballZ_dnf.py'
Nov 25 20:01:45 compute-0 sudo[56043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:46 compute-0 python3.9[56045]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:01:47 compute-0 sudo[56043]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:48 compute-0 sudo[56196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkxulrdzxzdrpdljhsttsnowgoiukjxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100907.6694276-152-249839459931930/AnsiballZ_setup.py'
Nov 25 20:01:48 compute-0 sudo[56196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:48 compute-0 python3.9[56198]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:01:48 compute-0 sudo[56196]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:49 compute-0 sudo[56350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iismprnchyswyielnouiazvswvgpeqlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100908.6523662-160-37596182280434/AnsiballZ_stat.py'
Nov 25 20:01:49 compute-0 sudo[56350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:49 compute-0 python3.9[56352]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:01:49 compute-0 sudo[56350]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:49 compute-0 sudo[56502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkkjxvhzzedbhuiqxeeuahkugkynpkfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100909.536015-169-130366764570200/AnsiballZ_stat.py'
Nov 25 20:01:49 compute-0 sudo[56502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:50 compute-0 python3.9[56504]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:01:50 compute-0 sudo[56502]: pam_unix(sudo:session): session closed for user root
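
The two ansible-stat probes above (/run/ostree-booted and /sbin/transactional-update) are platform detection: the first path exists only on rpm-ostree/image-based hosts and the second on transactional-update (openSUSE-style) systems, and the role uses the answers to pick a package-management strategy. Equivalent manual check (not from the log):

    test -e /run/ostree-booted && echo "image-based (ostree) host" || echo "regular RPM host"
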
Nov 25 20:01:50 compute-0 sudo[56654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqypevfhubjemkzaklyvpuubvtshxxfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100910.5113964-179-157127528037136/AnsiballZ_command.py'
Nov 25 20:01:50 compute-0 sudo[56654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:51 compute-0 python3.9[56656]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:01:51 compute-0 sudo[56654]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:52 compute-0 sudo[56807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btfsmofdhmikwqqsqshkzmaqvgtcdvwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100911.4754324-189-46204180309287/AnsiballZ_service_facts.py'
Nov 25 20:01:52 compute-0 sudo[56807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:52 compute-0 python3.9[56809]: ansible-service_facts Invoked
Nov 25 20:01:52 compute-0 network[56826]: You are using the 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:01:52 compute-0 network[56827]: 'network-scripts' will be removed from the distribution in the near future.
Nov 25 20:01:52 compute-0 network[56828]: It is advised to switch to 'NetworkManager' for network management.
Nov 25 20:01:56 compute-0 sudo[56807]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:58 compute-0 sudo[57111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rikvahsuusluonzzoxsewjrlargsszzx ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764100917.6792612-204-171595103457548/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764100917.6792612-204-171595103457548/args'
Nov 25 20:01:58 compute-0 sudo[57111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:58 compute-0 sudo[57111]: pam_unix(sudo:session): session closed for user root
Nov 25 20:01:59 compute-0 sudo[57278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slitmsswmssyagdxwahnirbsxfpnagpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100918.6495302-215-3354627594957/AnsiballZ_dnf.py'
Nov 25 20:01:59 compute-0 sudo[57278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:01:59 compute-0 python3.9[57280]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:02:00 compute-0 sudo[57278]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:01 compute-0 sudo[57431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tliwpjaejldibkuvpitkgwrpwcrbgwlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100920.964042-228-212020049635562/AnsiballZ_package_facts.py'
Nov 25 20:02:01 compute-0 sudo[57431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:02 compute-0 python3.9[57433]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 20:02:02 compute-0 sudo[57431]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:03 compute-0 sudo[57583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdjszkajtjibssmjgwkppdermfbbrpsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100922.777829-238-236629061954803/AnsiballZ_stat.py'
Nov 25 20:02:03 compute-0 sudo[57583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:03 compute-0 python3.9[57585]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:03 compute-0 sudo[57583]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:03 compute-0 sudo[57708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmevacwixqyubpvkffpttrvaesesbiee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100922.777829-238-236629061954803/AnsiballZ_copy.py'
Nov 25 20:02:03 compute-0 sudo[57708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:04 compute-0 python3.9[57710]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100922.777829-238-236629061954803/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:04 compute-0 sudo[57708]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:04 compute-0 sudo[57862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyhhnlavcfcxccafukkmjoaqrbgawwzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100924.504486-253-214661851432646/AnsiballZ_stat.py'
Nov 25 20:02:04 compute-0 sudo[57862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:05 compute-0 python3.9[57864]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:05 compute-0 sudo[57862]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:05 compute-0 sudo[57987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffprpvfnbpmiodykfrhvcsnnceieyctg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100924.504486-253-214661851432646/AnsiballZ_copy.py'
Nov 25 20:02:05 compute-0 sudo[57987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:05 compute-0 python3.9[57989]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100924.504486-253-214661851432646/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:05 compute-0 sudo[57987]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:07 compute-0 sudo[58141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfqqtmtnmxvkriogdfqkhqepojzqmtft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100926.5476894-274-146174062135833/AnsiballZ_lineinfile.py'
Nov 25 20:02:07 compute-0 sudo[58141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:07 compute-0 python3.9[58143]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:07 compute-0 sudo[58141]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:08 compute-0 sudo[58295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzpolrhqnttosfgeaxzctnilwztagzlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100927.8730335-289-89149563483786/AnsiballZ_setup.py'
Nov 25 20:02:08 compute-0 sudo[58295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:08 compute-0 python3.9[58297]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:02:08 compute-0 sudo[58295]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:09 compute-0 sudo[58379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ottfnmrtbezqzphgcxrkwfzjigafkzio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100927.8730335-289-89149563483786/AnsiballZ_systemd.py'
Nov 25 20:02:09 compute-0 sudo[58379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:09 compute-0 python3.9[58381]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:02:09 compute-0 sudo[58379]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:10 compute-0 sudo[58533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpatomcayxujspwikgpjcmhzkigcgyik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100930.3765092-305-109948183695456/AnsiballZ_setup.py'
Nov 25 20:02:10 compute-0 sudo[58533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:11 compute-0 python3.9[58535]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:02:11 compute-0 sudo[58533]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:11 compute-0 sudo[58617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbjfnuhblbyifitluffymozwdejuknlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100930.3765092-305-109948183695456/AnsiballZ_systemd.py'
Nov 25 20:02:11 compute-0 sudo[58617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:11 compute-0 python3.9[58619]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:02:11 compute-0 chronyd[799]: chronyd exiting
Nov 25 20:02:11 compute-0 systemd[1]: Stopping NTP client/server...
Nov 25 20:02:11 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 20:02:11 compute-0 systemd[1]: Stopped NTP client/server.
Nov 25 20:02:11 compute-0 systemd[1]: Starting NTP client/server...
Nov 25 20:02:12 compute-0 chronyd[58628]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 20:02:12 compute-0 chronyd[58628]: Frequency -26.849 +/- 0.983 ppm read from /var/lib/chrony/drift
Nov 25 20:02:12 compute-0 chronyd[58628]: Loaded seccomp filter (level 2)
Nov 25 20:02:12 compute-0 systemd[1]: Started NTP client/server.
Nov 25 20:02:12 compute-0 sudo[58617]: pam_unix(sudo:session): session closed for user root
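
This block completes the timesync setup: chrony is installed, /etc/chrony.conf and /etc/sysconfig/chronyd are templated in (with backups), PEERNTP=no is written to /etc/sysconfig/network so DHCP-supplied NTP servers are ignored, and chronyd is enabled and restarted — the new instance (pid 58628) reads the stored drift of -26.849 ppm on startup. A quick health check, as a sketch (commands assumed, not logged):

    chronyc tracking                       # reference source, offset, and frequency
    chronyc sources -v                     # servers taken from the templated chrony.conf
    grep ^PEERNTP /etc/sysconfig/network   # expect: PEERNTP=no
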
Nov 25 20:02:12 compute-0 sshd-session[53678]: Connection closed by 192.168.122.30 port 38828
Nov 25 20:02:12 compute-0 sshd-session[53675]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:02:12 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 20:02:12 compute-0 systemd[1]: session-12.scope: Consumed 30.660s CPU time.
Nov 25 20:02:12 compute-0 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Nov 25 20:02:12 compute-0 systemd-logind[789]: Removed session 12.
Nov 25 20:02:18 compute-0 sshd-session[58654]: Accepted publickey for zuul from 192.168.122.30 port 33346 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:02:18 compute-0 systemd-logind[789]: New session 13 of user zuul.
Nov 25 20:02:18 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 25 20:02:18 compute-0 sshd-session[58654]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:02:18 compute-0 sudo[58807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnchqhvsajskzdygtqlwhwcfgllcpogh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100938.3480222-22-180801332581497/AnsiballZ_file.py'
Nov 25 20:02:18 compute-0 sudo[58807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:19 compute-0 python3.9[58809]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:19 compute-0 sudo[58807]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:20 compute-0 sudo[58959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvuypkwyxokppdunyzqhbeeenayrjwnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100939.4101224-34-96446577338342/AnsiballZ_stat.py'
Nov 25 20:02:20 compute-0 sudo[58959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:20 compute-0 python3.9[58961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:20 compute-0 sudo[58959]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:20 compute-0 sudo[59082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdodnibtaiprxtkjhqmidaoqanhrksrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100939.4101224-34-96446577338342/AnsiballZ_copy.py'
Nov 25 20:02:20 compute-0 sudo[59082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:21 compute-0 python3.9[59084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100939.4101224-34-96446577338342/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:21 compute-0 sudo[59082]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:21 compute-0 sshd-session[58657]: Connection closed by 192.168.122.30 port 33346
Nov 25 20:02:21 compute-0 sshd-session[58654]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:02:21 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 20:02:21 compute-0 systemd[1]: session-13.scope: Consumed 1.924s CPU time.
Nov 25 20:02:21 compute-0 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Nov 25 20:02:21 compute-0 systemd-logind[789]: Removed session 13.
Nov 25 20:02:26 compute-0 sshd-session[59109]: Accepted publickey for zuul from 192.168.122.30 port 55838 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:02:26 compute-0 systemd-logind[789]: New session 14 of user zuul.
Nov 25 20:02:26 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 25 20:02:26 compute-0 sshd-session[59109]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:02:27 compute-0 python3.9[59262]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:02:29 compute-0 sudo[59416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phayqwibnmuhkfrgrlmmizrhexspppnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100948.4291797-33-173378402824006/AnsiballZ_file.py'
Nov 25 20:02:29 compute-0 sudo[59416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:29 compute-0 python3.9[59418]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:29 compute-0 sudo[59416]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:30 compute-0 sudo[59591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlpnwklwgdcroozqaqvffvjbetiurvqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100949.4801695-41-92345606511831/AnsiballZ_stat.py'
Nov 25 20:02:30 compute-0 sudo[59591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:30 compute-0 python3.9[59593]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:30 compute-0 sudo[59591]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:31 compute-0 sudo[59714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vihrynzclzwqxzqsfputhecngrexvffe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100949.4801695-41-92345606511831/AnsiballZ_copy.py'
Nov 25 20:02:31 compute-0 sudo[59714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:31 compute-0 python3.9[59716]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764100949.4801695-41-92345606511831/.source.json _original_basename=.f0vls9n7 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:31 compute-0 sudo[59714]: pam_unix(sudo:session): session closed for user root
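
The auth.json installed for root's podman config carries checksum bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f, which appears to match the SHA-1 of the literal string "{}" — an empty credentials file, consistent with no registry login being deployed in this job. To check (an assumption, not logged):

    printf '{}' | sha1sum                        # bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f if so
    sudo cat /root/.config/containers/auth.json  # expect: {}
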
Nov 25 20:02:32 compute-0 sudo[59866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcrbnztsebpzwomyamjgcwitdgzpltxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100951.8471775-64-23618951260130/AnsiballZ_stat.py'
Nov 25 20:02:32 compute-0 sudo[59866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:32 compute-0 python3.9[59868]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:32 compute-0 sudo[59866]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:32 compute-0 sudo[59989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keojbestelskrazkruutfebznfitupvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100951.8471775-64-23618951260130/AnsiballZ_copy.py'
Nov 25 20:02:32 compute-0 sudo[59989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:33 compute-0 python3.9[59991]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100951.8471775-64-23618951260130/.source _original_basename=.tux576rt follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:33 compute-0 sudo[59989]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:33 compute-0 sudo[60141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbajwcvehavbldiliwxvbifhqeytxekt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100953.3154447-80-192267788064651/AnsiballZ_file.py'
Nov 25 20:02:33 compute-0 sudo[60141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:33 compute-0 python3.9[60143]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:02:33 compute-0 sudo[60141]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:34 compute-0 sudo[60293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vongxieoblkrhqxygfbxlfefgaoizcum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100954.077009-88-10158820805511/AnsiballZ_stat.py'
Nov 25 20:02:34 compute-0 sudo[60293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:34 compute-0 python3.9[60295]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:34 compute-0 sudo[60293]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:35 compute-0 sudo[60416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eufakzypyfozvurcovhiisilnolarsid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100954.077009-88-10158820805511/AnsiballZ_copy.py'
Nov 25 20:02:35 compute-0 sudo[60416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:35 compute-0 python3.9[60418]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764100954.077009-88-10158820805511/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:02:35 compute-0 sudo[60416]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:35 compute-0 sudo[60568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgnoozoauqkxlsdsfysivbmzuebmoukx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100955.587262-88-167586335014665/AnsiballZ_stat.py'
Nov 25 20:02:35 compute-0 sudo[60568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:36 compute-0 python3.9[60570]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:36 compute-0 sudo[60568]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:36 compute-0 sudo[60691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iymgpdahgbryoxkbkhkdyjcrqrmrfcmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100955.587262-88-167586335014665/AnsiballZ_copy.py'
Nov 25 20:02:36 compute-0 sudo[60691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:36 compute-0 python3.9[60693]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764100955.587262-88-167586335014665/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:02:36 compute-0 sudo[60691]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:37 compute-0 sudo[60843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shqiynvhazwhmraqtynezulanwnzxgeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100957.1439965-117-200109223146574/AnsiballZ_file.py'
Nov 25 20:02:37 compute-0 sudo[60843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:37 compute-0 python3.9[60845]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:37 compute-0 sudo[60843]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:38 compute-0 sudo[60995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcoimogiffalaarggrydwldoqpkzqifo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100957.966226-125-158396222610531/AnsiballZ_stat.py'
Nov 25 20:02:38 compute-0 sudo[60995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:38 compute-0 python3.9[60997]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:38 compute-0 sudo[60995]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:38 compute-0 sudo[61118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqsanchieezjwwxzohzvnonmjymnbzkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100957.966226-125-158396222610531/AnsiballZ_copy.py'
Nov 25 20:02:38 compute-0 sudo[61118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:39 compute-0 python3.9[61120]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100957.966226-125-158396222610531/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:39 compute-0 sudo[61118]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:39 compute-0 sudo[61270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmnzeakxzsrtpmqojvztzanpkggjfjsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100959.3763344-140-232300392711009/AnsiballZ_stat.py'
Nov 25 20:02:39 compute-0 sudo[61270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:39 compute-0 python3.9[61272]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:39 compute-0 sudo[61270]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:40 compute-0 sudo[61393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycpbfxwfnlihjkqcefizlpdkljgxcopz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100959.3763344-140-232300392711009/AnsiballZ_copy.py'
Nov 25 20:02:40 compute-0 sudo[61393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:40 compute-0 python3.9[61395]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100959.3763344-140-232300392711009/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:40 compute-0 sudo[61393]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:41 compute-0 sudo[61545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owyzdtxrnffftglvhhltggjkmrctlwuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100960.8108678-155-277900968727172/AnsiballZ_systemd.py'
Nov 25 20:02:41 compute-0 sudo[61545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:41 compute-0 python3.9[61547]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:02:41 compute-0 systemd[1]: Reloading.
Nov 25 20:02:41 compute-0 systemd-rc-local-generator[61575]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:02:41 compute-0 systemd-sysv-generator[61578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Nov 25 20:02:41 compute-0 systemd[1]: Reloading.
Nov 25 20:02:42 compute-0 systemd-rc-local-generator[61614]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:02:42 compute-0 systemd-sysv-generator[61618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Nov 25 20:02:42 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 20:02:42 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 25 20:02:42 compute-0 sudo[61545]: pam_unix(sudo:session): session closed for user root
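
The sequence above installs the EDPM container-shutdown machinery: two root-only helper scripts under /var/local/libexec (labeled container_file_t), the edpm-container-shutdown.service unit, and a preset (91-edpm-container-shutdown.preset), then reloads systemd and enables/starts the unit — it goes straight from "Starting" to "Finished", so it behaves like a oneshot. Follow-up check, as a sketch (assumed commands):

    systemctl is-enabled edpm-container-shutdown.service
    systemctl cat edpm-container-shutdown.service   # the unit file copied at 20:02:39
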
Nov 25 20:02:42 compute-0 sudo[61774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzfybaekxztrbdysftoixnjzufckvpko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100962.4565306-163-249430309370998/AnsiballZ_stat.py'
Nov 25 20:02:42 compute-0 sudo[61774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:43 compute-0 python3.9[61776]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:43 compute-0 sudo[61774]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:43 compute-0 sudo[61897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-secguwzczzineqbohgpipyvkoqiwqymj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100962.4565306-163-249430309370998/AnsiballZ_copy.py'
Nov 25 20:02:43 compute-0 sudo[61897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:43 compute-0 python3.9[61899]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100962.4565306-163-249430309370998/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:43 compute-0 sudo[61897]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:44 compute-0 sudo[62049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwtzzqaipaczhejllxfszagbrzlzfdhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100963.903018-178-103239320758033/AnsiballZ_stat.py'
Nov 25 20:02:44 compute-0 sudo[62049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:44 compute-0 python3.9[62051]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:44 compute-0 sudo[62049]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:44 compute-0 sudo[62172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzozbwadetvrowfapxulacxavlgvklyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100963.903018-178-103239320758033/AnsiballZ_copy.py'
Nov 25 20:02:44 compute-0 sudo[62172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:45 compute-0 python3.9[62174]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100963.903018-178-103239320758033/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:45 compute-0 sudo[62172]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:45 compute-0 sudo[62324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riiuwzjcyglvggwnkssrfteoypzomkup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100965.3693957-193-106428239581679/AnsiballZ_systemd.py'
Nov 25 20:02:45 compute-0 sudo[62324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:46 compute-0 python3.9[62326]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:02:46 compute-0 systemd[1]: Reloading.
Nov 25 20:02:46 compute-0 systemd-rc-local-generator[62355]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:02:46 compute-0 systemd-sysv-generator[62359]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Nov 25 20:02:46 compute-0 systemd[1]: Reloading.
Nov 25 20:02:46 compute-0 systemd-sysv-generator[62395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Nov 25 20:02:46 compute-0 systemd-rc-local-generator[62391]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:02:46 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 20:02:46 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 20:02:46 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 20:02:46 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 20:02:46 compute-0 sudo[62324]: pam_unix(sudo:session): session closed for user root
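
netns-placeholder.service is installed and started the same way (unit plus 91-netns-placeholder.preset). Judging by its description ("Create netns directory") and the run-netns-placeholder.mount event, it ensures /run/netns exists and is mounted so containers can attach network namespaces there. To inspect (commands are assumptions, not logged):

    findmnt /run/netns   # the netns mount point the unit prepares
    ip netns list
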
Nov 25 20:02:47 compute-0 python3.9[62553]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:02:47 compute-0 network[62570]: You are using the 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:02:47 compute-0 network[62571]: 'network-scripts' will be removed from the distribution in the near future.
Nov 25 20:02:47 compute-0 network[62572]: It is advised to switch to 'NetworkManager' for network management.
Nov 25 20:02:52 compute-0 sudo[62832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxhkxsrkdcczwwgoiejdzahvgwexutd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100972.1365106-209-273965405179657/AnsiballZ_systemd.py'
Nov 25 20:02:52 compute-0 sudo[62832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:52 compute-0 python3.9[62834]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:02:52 compute-0 systemd[1]: Reloading.
Nov 25 20:02:52 compute-0 systemd-sysv-generator[62865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Nov 25 20:02:52 compute-0 systemd-rc-local-generator[62862]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:02:53 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 20:02:53 compute-0 iptables.init[62873]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 20:02:53 compute-0 iptables.init[62873]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 20:02:53 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 20:02:53 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 20:02:53 compute-0 sudo[62832]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:54 compute-0 sudo[63067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bztovbtiglxqkiowuerhikxzhfcxvtwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100973.7644956-209-97682317983330/AnsiballZ_systemd.py'
Nov 25 20:02:54 compute-0 sudo[63067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:54 compute-0 python3.9[63069]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:02:54 compute-0 sudo[63067]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:55 compute-0 sudo[63221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jooqdvbstqmduvrkyjpcyvviwffuvcnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100974.884531-225-249744594704742/AnsiballZ_systemd.py'
Nov 25 20:02:55 compute-0 sudo[63221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:55 compute-0 python3.9[63223]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:02:55 compute-0 systemd[1]: Reloading.
Nov 25 20:02:55 compute-0 systemd-sysv-generator[63254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Nov 25 20:02:55 compute-0 systemd-rc-local-generator[63250]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:02:55 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 25 20:02:55 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 25 20:02:56 compute-0 sudo[63221]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:56 compute-0 sudo[63413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzjfqbfugvkzoqzpjjxpobktifhqmygf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100976.2814322-233-11108180944260/AnsiballZ_command.py'
Nov 25 20:02:56 compute-0 sudo[63413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:57 compute-0 python3.9[63415]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:02:57 compute-0 sudo[63413]: pam_unix(sudo:session): session closed for user root
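
Here the firewall backend is switched from legacy iptables to nftables: iptables.service is stopped and disabled (its init script resets every chain policy to ACCEPT and flushes the rules on the way down, per the [ OK ] lines), ip6tables.service likewise, nftables is enabled and started, and finally `nft flush ruleset` wipes any leftovers so the EDPM rule files staged under /var/lib/edpm-config/firewall can be applied to a clean slate. State check (a sketch, not logged):

    systemctl is-enabled nftables && systemctl is-active nftables
    nft list ruleset   # empty immediately after the flush
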
Nov 25 20:02:57 compute-0 sudo[63566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozqkronpojkbxwkqtegfxpqynwazemxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100977.533088-247-206900530237052/AnsiballZ_stat.py'
Nov 25 20:02:57 compute-0 sudo[63566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:58 compute-0 python3.9[63568]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:02:58 compute-0 sudo[63566]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:58 compute-0 sudo[63691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvuovtlrigutuelfayhbfldgomxysmvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100977.533088-247-206900530237052/AnsiballZ_copy.py'
Nov 25 20:02:58 compute-0 sudo[63691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:58 compute-0 python3.9[63693]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100977.533088-247-206900530237052/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:58 compute-0 sudo[63691]: pam_unix(sudo:session): session closed for user root
Nov 25 20:02:59 compute-0 sudo[63844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txhjebfudmdogxgpilektqtxxymcfuqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100979.059125-262-275307002458952/AnsiballZ_systemd.py'
Nov 25 20:02:59 compute-0 sudo[63844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:02:59 compute-0 python3.9[63846]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:02:59 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 20:02:59 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 20:02:59 compute-0 sshd[1007]: Received SIGHUP; restarting.
Nov 25 20:02:59 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Nov 25 20:02:59 compute-0 sshd[1007]: Server listening on :: port 22.
Nov 25 20:02:59 compute-0 sudo[63844]: pam_unix(sudo:session): session closed for user root
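
The sshd_config deployment above follows the validate-then-reload pattern: Ansible runs `/usr/sbin/sshd -T -f %s` against the rendered file at a temporary path, so a syntax error can never clobber the live config, and only then moves it into place; the reload delivers SIGHUP, after which sshd re-reads its config and re-binds its listeners ("Server listening on ..."). Manual equivalent (assumed commands):

    sshd -t -f /etc/ssh/sshd_config          # syntax check only
    sshd -T -f /etc/ssh/sshd_config | head   # dump the effective configuration
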
Nov 25 20:03:00 compute-0 sudo[64000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moifigorwswnfquypgbhncoyqvdbaqnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100980.092593-270-45781568675160/AnsiballZ_file.py'
Nov 25 20:03:00 compute-0 sudo[64000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:00 compute-0 python3.9[64002]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:00 compute-0 sudo[64000]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:01 compute-0 sudo[64152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kexujdwlwfcovoceimfhvtpmybwezjgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100980.8984704-278-59240797824001/AnsiballZ_stat.py'
Nov 25 20:03:01 compute-0 sudo[64152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:01 compute-0 python3.9[64154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:03:01 compute-0 sudo[64152]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:01 compute-0 sudo[64275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joqtuegbrvyanxuqyfhcdvhfxcwxetht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100980.8984704-278-59240797824001/AnsiballZ_copy.py'
Nov 25 20:03:01 compute-0 sudo[64275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:02 compute-0 python3.9[64277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100980.8984704-278-59240797824001/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:02 compute-0 sudo[64275]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:03 compute-0 sudo[64427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-robctxyeiuecythmliushzfudbydxpnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100982.5318441-296-231838491005296/AnsiballZ_timezone.py'
Nov 25 20:03:03 compute-0 sudo[64427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:03 compute-0 python3.9[64429]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 20:03:03 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 20:03:03 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 20:03:03 compute-0 sudo[64427]: pam_unix(sudo:session): session closed for user root
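
community.general.timezone sets the host clock zone to UTC through systemd-timedated, which is why the "Time & Date Service" is D-Bus-activated just for this task. Verification sketch (assumed command):

    timedatectl show --property=Timezone   # expect: Timezone=UTC
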
Nov 25 20:03:04 compute-0 sudo[64583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahikpmtfkgqhfylifcbgfiggntaoxllx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100983.7126105-305-230921309871702/AnsiballZ_file.py'
Nov 25 20:03:04 compute-0 sudo[64583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:04 compute-0 python3.9[64585]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:04 compute-0 sudo[64583]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:04 compute-0 sudo[64735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otrgtmfddmkjvbviklvbjzoqparmdkfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100984.590503-313-243066301132970/AnsiballZ_stat.py'
Nov 25 20:03:04 compute-0 sudo[64735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:05 compute-0 python3.9[64737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:03:05 compute-0 sudo[64735]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:05 compute-0 sudo[64858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zejmtxotcqzmorydebutmcxgayrewmal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100984.590503-313-243066301132970/AnsiballZ_copy.py'
Nov 25 20:03:05 compute-0 sudo[64858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:05 compute-0 python3.9[64860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100984.590503-313-243066301132970/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:05 compute-0 sudo[64858]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:06 compute-0 sudo[65010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-merqygtwepugtkwaaxhrdoclqglgoyfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100985.9892848-328-75889618822085/AnsiballZ_stat.py'
Nov 25 20:03:06 compute-0 sudo[65010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:06 compute-0 python3.9[65012]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:03:06 compute-0 sudo[65010]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:07 compute-0 sudo[65133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgqzvvgwvqsthkiyjqimcgtjnltuigkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100985.9892848-328-75889618822085/AnsiballZ_copy.py'
Nov 25 20:03:07 compute-0 sudo[65133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:07 compute-0 python3.9[65135]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764100985.9892848-328-75889618822085/.source.yaml _original_basename=.a57twmeg follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:07 compute-0 sudo[65133]: pam_unix(sudo:session): session closed for user root
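The stat/copy pairs above are Ansible's idempotent transfer pattern: ansible.legacy.stat reads the destination's SHA-1, and ansible.legacy.copy only rewrites the file when that digest differs from the staged source (the checksum= value in each copy call). A minimal Python sketch of that comparison, with illustrative paths:

    import hashlib, os, shutil

    def sha1sum(path):
        # SHA-1 hex digest of a file, or None if it does not exist yet
        if not os.path.exists(path):
            return None
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src, dest, mode=0o644):
        # Copy src over dest only when contents differ (Ansible-style idempotence)
        if sha1sum(src) == sha1sum(dest):
            return False          # unchanged: task reports "ok"
        shutil.copy2(src, dest)
        os.chmod(dest, mode)
        return True               # task reports "changed"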
Nov 25 20:03:07 compute-0 sshd-session[65212]: Connection closed by 176.32.195.85 port 42456
Nov 25 20:03:07 compute-0 sudo[65286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atdwvoruilzlevvpnlhzttsurvpvyxpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100987.4875824-343-11698930352639/AnsiballZ_stat.py'
Nov 25 20:03:07 compute-0 sudo[65286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:08 compute-0 sshd-session[65288]: Unable to negotiate with 176.32.195.85 port 42464: no matching host key type found. Their offer: ssh-rsa,ssh-dss [preauth]
Nov 25 20:03:08 compute-0 python3.9[65289]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:03:08 compute-0 sudo[65286]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:08 compute-0 sudo[65411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijiyvdbqbznnydxacbllngposyqvhoou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100987.4875824-343-11698930352639/AnsiballZ_copy.py'
Nov 25 20:03:08 compute-0 sudo[65411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:08 compute-0 python3.9[65413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100987.4875824-343-11698930352639/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:08 compute-0 sudo[65411]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:09 compute-0 sudo[65563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vopdamaexuoamgapprjnentonxbqunny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100988.9828947-358-260067077848095/AnsiballZ_command.py'
Nov 25 20:03:09 compute-0 sudo[65563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:09 compute-0 python3.9[65565]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:03:09 compute-0 sudo[65563]: pam_unix(sudo:session): session closed for user root
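The copied /etc/nftables/iptables.nft is then loaded with nft -f, which applies the whole file as a single transaction. A sketch of that apply step; the preliminary parse-only pass with -c is an addition here, not something the play runs at this point:

    import subprocess

    RULESET = "/etc/nftables/iptables.nft"

    def apply_ruleset(path=RULESET):
        # Parse-only pass: -c checks syntax without touching the live ruleset
        subprocess.run(["nft", "-c", "-f", path], check=True)
        # Commit: nft loads the file atomically, as one transaction
        subprocess.run(["nft", "-f", path], check=True)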
Nov 25 20:03:10 compute-0 sudo[65716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcsxrasfczxmbqhkylnzebbctadgtvwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100989.7892802-366-133597668389384/AnsiballZ_command.py'
Nov 25 20:03:10 compute-0 sudo[65716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:10 compute-0 python3.9[65718]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:03:10 compute-0 sudo[65716]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:11 compute-0 sudo[65869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfxnvxxgijrkonrcbjoovyyjznevogxi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764100990.5787435-374-23001556345339/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 20:03:11 compute-0 sudo[65869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:11 compute-0 python3[65871]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 20:03:11 compute-0 sudo[65869]: pam_unix(sudo:session): session closed for user root
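After dumping the live ruleset as JSON (nft -j list ruleset), the custom edpm_nftables_from_files module reads the YAML rule definitions staged under /var/lib/edpm-config/firewall. Its real schema is not visible in this log; purely as an illustration, a loader that merges every *.yaml file in that directory might look like this (the field layout is an assumption):

    import glob
    import yaml  # PyYAML

    SRC = "/var/lib/edpm-config/firewall"

    def load_rules(src=SRC):
        rules = []
        for path in sorted(glob.glob(f"{src}/*.yaml")):
            with open(path) as f:
                data = yaml.safe_load(f) or []
            rules.extend(data)  # assumes each file holds a list of rule dicts
        return rules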
Nov 25 20:03:12 compute-0 sudo[66021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gifgshtgbarzkutaqduoenmvceluhmoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100991.6052485-382-259479630477489/AnsiballZ_stat.py'
Nov 25 20:03:12 compute-0 sudo[66021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:12 compute-0 python3.9[66023]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:03:12 compute-0 sudo[66021]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:12 compute-0 sudo[66144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nckuxabfaxnenvdgpyfmoxvnqhcnwuec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100991.6052485-382-259479630477489/AnsiballZ_copy.py'
Nov 25 20:03:12 compute-0 sudo[66144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:12 compute-0 python3.9[66146]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100991.6052485-382-259479630477489/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:12 compute-0 sudo[66144]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:13 compute-0 sudo[66296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxuduewqsypkrfbgfvpvdrcvdxwqdbxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100993.1860702-397-169755834618280/AnsiballZ_stat.py'
Nov 25 20:03:13 compute-0 sudo[66296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:13 compute-0 python3.9[66298]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:03:13 compute-0 sudo[66296]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:14 compute-0 sudo[66419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vswytxqanapkxstxllcgpafpysacajwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100993.1860702-397-169755834618280/AnsiballZ_copy.py'
Nov 25 20:03:14 compute-0 sudo[66419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:14 compute-0 python3.9[66421]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100993.1860702-397-169755834618280/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:14 compute-0 sudo[66419]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:15 compute-0 sudo[66571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpqoksduitdvzmzpszxgvpdzaijaupqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100994.776687-412-237251407824559/AnsiballZ_stat.py'
Nov 25 20:03:15 compute-0 sudo[66571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:15 compute-0 python3.9[66573]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:03:15 compute-0 sudo[66571]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:15 compute-0 sudo[66694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fentvqerrwgxmltlqmyvgpazkcgenwus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100994.776687-412-237251407824559/AnsiballZ_copy.py'
Nov 25 20:03:15 compute-0 sudo[66694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:16 compute-0 python3.9[66696]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100994.776687-412-237251407824559/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:16 compute-0 sudo[66694]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:16 compute-0 sudo[66846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alvsapcjxgytnevjefegzvsimvschgwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100996.319493-427-34545240404383/AnsiballZ_stat.py'
Nov 25 20:03:16 compute-0 sudo[66846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:16 compute-0 python3.9[66848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:03:16 compute-0 sudo[66846]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:17 compute-0 sudo[66970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elqyqlhwkjqmhbrjcewrtngwbawvkrwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100996.319493-427-34545240404383/AnsiballZ_copy.py'
Nov 25 20:03:17 compute-0 sudo[66970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:17 compute-0 python3.9[66972]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100996.319493-427-34545240404383/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:17 compute-0 sudo[66970]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:18 compute-0 sudo[67122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxexkqdhfwvdnylooxyhklayclnifoxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100997.8831217-442-40473690762753/AnsiballZ_stat.py'
Nov 25 20:03:18 compute-0 sudo[67122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:18 compute-0 python3.9[67124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:03:18 compute-0 sudo[67122]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:19 compute-0 sudo[67245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcrudtvgobvglvojmlhsljsyiwwnqcdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100997.8831217-442-40473690762753/AnsiballZ_copy.py'
Nov 25 20:03:19 compute-0 sudo[67245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:19 compute-0 python3.9[67247]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764100997.8831217-442-40473690762753/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:19 compute-0 sudo[67245]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:19 compute-0 sudo[67397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngsrzthvvcghjjitaeasiftakylwgdkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764100999.4808986-457-269568557514679/AnsiballZ_file.py'
Nov 25 20:03:19 compute-0 sudo[67397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:20 compute-0 python3.9[67399]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:20 compute-0 sudo[67397]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:20 compute-0 sudo[67549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptxhgixubyalivdrinonuatppcajpomw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101000.2310703-465-267509588112990/AnsiballZ_command.py'
Nov 25 20:03:20 compute-0 sudo[67549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:20 compute-0 python3.9[67551]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:03:20 compute-0 sudo[67549]: pam_unix(sudo:session): session closed for user root
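The five generated fragments are concatenated, chains declared before the rules that populate them, and piped through nft -c -f -, so the combined ruleset is parsed as one transaction from stdin without being installed. The same check in Python:

    import subprocess

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def check_combined(paths=FRAGMENTS):
        blob = b"".join(open(p, "rb").read() for p in paths)
        # -f - reads from stdin; -c parses without modifying the live ruleset
        subprocess.run(["nft", "-c", "-f", "-"], input=blob, check=True)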
Nov 25 20:03:21 compute-0 sudo[67708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyaviegtzgijallukrhepjtdaigviueh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101001.1873538-473-178054368977225/AnsiballZ_blockinfile.py'
Nov 25 20:03:21 compute-0 sudo[67708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:21 compute-0 python3.9[67710]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:21 compute-0 sudo[67708]: pam_unix(sudo:session): session closed for user root
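blockinfile keeps those include lines inside a marker-delimited block of /etc/sysconfig/nftables.conf, and validate=nft -c -f %s makes Ansible run that command against a temporary copy before the real file is replaced. The marker logic reduces to roughly this:

    BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    END = "# END ANSIBLE MANAGED BLOCK"

    def set_block(text, block):
        # Replace or append the marker-delimited block; assumes the markers,
        # if present, appear exactly once and in order
        lines = text.splitlines()
        try:
            i, j = lines.index(BEGIN), lines.index(END)
            lines[i + 1:j] = block.splitlines()          # rewrite existing body
        except ValueError:
            lines += [BEGIN, *block.splitlines(), END]   # first run: append
        return "\n".join(lines) + "\n"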
Nov 25 20:03:22 compute-0 sudo[67861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsrtdfwwsqdrwpwhamftibhiulfjtlbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101002.1743329-482-105924540088682/AnsiballZ_file.py'
Nov 25 20:03:22 compute-0 sudo[67861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:22 compute-0 python3.9[67863]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:22 compute-0 sudo[67861]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:23 compute-0 sudo[68013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auxqhtvqsohensdvgumxtehansqhdxas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101002.843364-482-144800581129467/AnsiballZ_file.py'
Nov 25 20:03:23 compute-0 sudo[68013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:23 compute-0 python3.9[68015]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:23 compute-0 sudo[68013]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:24 compute-0 sudo[68165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diydhzdnxamukjmezhreeeqngutguwmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101003.6108062-497-259631195229192/AnsiballZ_mount.py'
Nov 25 20:03:24 compute-0 sudo[68165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:24 compute-0 python3.9[68167]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 20:03:24 compute-0 sudo[68165]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:25 compute-0 sudo[68318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cltnttfoqmayvbsubhhoepbfbzshdznp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101004.630066-497-247560181912751/AnsiballZ_mount.py'
Nov 25 20:03:25 compute-0 sudo[68318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:25 compute-0 python3.9[68320]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 20:03:25 compute-0 sudo[68318]: pam_unix(sudo:session): session closed for user root
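The two ansible.posix.mount tasks mount hugetlbfs with a fixed page size per directory and, with boot=True, also persist matching entries in /etc/fstab. The immediate mounts are equivalent to mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G (and 2M respectively); the fstab persistence is omitted in this sketch:

    import os, subprocess

    MOUNTS = {"/dev/hugepages1G": "pagesize=1G",
              "/dev/hugepages2M": "pagesize=2M"}

    def mount_hugepages(mounts=MOUNTS):
        for path, opts in mounts.items():
            os.makedirs(path, exist_ok=True)
            # src is "none": hugetlbfs is virtual, with no backing device
            subprocess.run(["mount", "-t", "hugetlbfs", "-o", opts,
                            "none", path], check=True)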
Nov 25 20:03:25 compute-0 sshd-session[59112]: Connection closed by 192.168.122.30 port 55838
Nov 25 20:03:25 compute-0 sshd-session[59109]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:03:25 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 20:03:25 compute-0 systemd[1]: session-14.scope: Consumed 43.064s CPU time.
Nov 25 20:03:25 compute-0 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Nov 25 20:03:25 compute-0 systemd-logind[789]: Removed session 14.
Nov 25 20:03:31 compute-0 sshd-session[68347]: Accepted publickey for zuul from 192.168.122.30 port 43498 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:03:31 compute-0 systemd-logind[789]: New session 15 of user zuul.
Nov 25 20:03:31 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 25 20:03:31 compute-0 sshd-session[68347]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:03:31 compute-0 sudo[68500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzqddbefmizaisrcgqczoajbdduqaos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101011.2308326-16-159345854336628/AnsiballZ_tempfile.py'
Nov 25 20:03:31 compute-0 sudo[68500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:32 compute-0 python3.9[68502]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 20:03:32 compute-0 sudo[68500]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:32 compute-0 sudo[68652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtivaymnsapnwddiiaijxcuofsdhxwrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101012.2713962-28-127757067670542/AnsiballZ_stat.py'
Nov 25 20:03:32 compute-0 sudo[68652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:32 compute-0 python3.9[68654]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:03:32 compute-0 sudo[68652]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:33 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 20:03:33 compute-0 sudo[68807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtvuzqynmcfwclgzmyzispgamvxcooja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101013.2423372-38-21846032805096/AnsiballZ_setup.py'
Nov 25 20:03:33 compute-0 sudo[68807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:34 compute-0 python3.9[68809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:03:34 compute-0 sudo[68807]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:35 compute-0 sudo[68959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvbjchannbipcratsoltfeaximwyfjvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101014.5816748-47-237100598707221/AnsiballZ_blockinfile.py'
Nov 25 20:03:35 compute-0 sudo[68959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:35 compute-0 python3.9[68961]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6AbHDFm6oiXOksIFhVfW+rKLsG5lsUMZ4h0LK/vi3baEm3lDOSCFiRaledzvCGw8pcbo5B5Ui9LwA6ZrurFA4EvBdRcNX2MYx8E7VQUBz19Cv5ssHGiokeLg/X8NRxvhizSNqEqTIXOBW/sjl2ML6B7c9Ho/On/2VOOogZqw39bPr58N1jZc8GGzZllxOMAGKQTrmbhrf2DDBl/eIvCnBeBarDQEuCXz7WY4Yg/5ExbD2MD4pVSgsmZKlZ3hZ/bGga19lvUoww5cRWp5mc1jmIEYS2Ns9Tam3tLAbA+4X02wq1hDbtpAOiV05naOPZcQ6NH8nyRFalVZ5JR9jJX31VllVhUB0J00We3tPSsAVeRWruGGvVcIZLpscmH3qIBb4ZpdiXwEBglE9K88PvEF5Q+ityKfnZBFAWx3pRzuVBMUZ+kKSL0KzJjdIcejX5wpTr9daIswPMC8qv8Bl3/6FNuXz9RqyUpIR5ujMgh8pQYJRGTx4LQoeVD95PGgEmW8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOnaumPCLJWozHeEwnBl9HIrTuoxcpbqSdFvByOBKVNO
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMt5rXHYYnmFaVy9amIUR4NjKK7m0LWd/U991zYz1D08AUE+ySzn4CMebmlNzvQuZCF/tJA3h93sOksMfGwh5Ds=
                                             create=True mode=0644 path=/tmp/ansible.ydysjqbg state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:35 compute-0 sudo[68959]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:36 compute-0 sudo[69111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipqlckohkaxqiihhuyvaxddehbyleflu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101015.4778419-55-187647465383433/AnsiballZ_command.py'
Nov 25 20:03:36 compute-0 sudo[69111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:36 compute-0 python3.9[69113]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ydysjqbg' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:03:36 compute-0 sudo[69111]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:36 compute-0 sudo[69265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cktzbuezgcotuttjiktjrmnwheureekh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101016.4245095-63-19161481600347/AnsiballZ_file.py'
Nov 25 20:03:36 compute-0 sudo[69265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:37 compute-0 python3.9[69267]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ydysjqbg state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:37 compute-0 sudo[69265]: pam_unix(sudo:session): session closed for user root
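Session 15 rewrites /etc/ssh/ssh_known_hosts by staging the gathered host keys in a root-owned temp file (blockinfile into /tmp/ansible.ydysjqbg), overwriting the target with cat tmp > /etc/ssh/ssh_known_hosts, and deleting the temp file. The same flow in Python:

    import os, shutil, tempfile

    KNOWN_HOSTS = "/etc/ssh/ssh_known_hosts"

    def replace_known_hosts(entries):
        fd, tmp = tempfile.mkstemp(prefix="ansible.")
        try:
            with os.fdopen(fd, "w") as f:
                f.write("\n".join(entries) + "\n")
            # like `cat tmp > dest`: truncates in place, keeping dest's inode
            shutil.copyfile(tmp, KNOWN_HOSTS)
        finally:
            os.unlink(tmp)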
Nov 25 20:03:37 compute-0 sshd-session[68350]: Connection closed by 192.168.122.30 port 43498
Nov 25 20:03:37 compute-0 sshd-session[68347]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:03:37 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 20:03:37 compute-0 systemd[1]: session-15.scope: Consumed 4.072s CPU time.
Nov 25 20:03:37 compute-0 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Nov 25 20:03:37 compute-0 systemd-logind[789]: Removed session 15.
Nov 25 20:03:43 compute-0 sshd-session[69292]: Accepted publickey for zuul from 192.168.122.30 port 59352 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:03:43 compute-0 systemd-logind[789]: New session 16 of user zuul.
Nov 25 20:03:43 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 25 20:03:43 compute-0 sshd-session[69292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:03:44 compute-0 python3.9[69445]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:03:45 compute-0 sudo[69599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muyaqyentcurzrsktmuhatjqfquqgalj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101024.8365724-32-190115988881145/AnsiballZ_systemd.py'
Nov 25 20:03:45 compute-0 sudo[69599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:45 compute-0 python3.9[69601]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 20:03:45 compute-0 sudo[69599]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:46 compute-0 sudo[69753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aclvofmrwraafwrxshnbixbhsgwxmuyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101026.173568-40-231733442744104/AnsiballZ_systemd.py'
Nov 25 20:03:46 compute-0 sudo[69753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:46 compute-0 python3.9[69755]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:03:46 compute-0 sudo[69753]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:47 compute-0 sudo[69906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcuzhlyifitaqvuhwsabrxyxljdvtfoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101027.192555-49-108171628720995/AnsiballZ_command.py'
Nov 25 20:03:47 compute-0 sudo[69906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:47 compute-0 python3.9[69908]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:03:47 compute-0 sudo[69906]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:48 compute-0 sudo[70059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hebpmtdqtwtyhwtzzkisbroondlzlhwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101028.1207745-57-5543428815976/AnsiballZ_stat.py'
Nov 25 20:03:48 compute-0 sudo[70059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:48 compute-0 python3.9[70061]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:03:48 compute-0 sudo[70059]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:49 compute-0 sudo[70213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcxpndunfudhzhzqcgbijwfsjrhiwptn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101029.0431097-65-200101984650473/AnsiballZ_command.py'
Nov 25 20:03:49 compute-0 sudo[70213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:49 compute-0 python3.9[70215]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:03:49 compute-0 sudo[70213]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:50 compute-0 sudo[70368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abctnmyrvnhtjggnwqboogmpepjzbpfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101029.8724697-73-212424155421245/AnsiballZ_file.py'
Nov 25 20:03:50 compute-0 sudo[70368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:50 compute-0 python3.9[70370]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:03:50 compute-0 sudo[70368]: pam_unix(sudo:session): session closed for user root
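This block is marker-file change detection: /etc/nftables/edpm-rules.nft.changed was touched when the rules were regenerated, so this later play re-applies the flush/rules/update-jumps fragments only while the marker exists, then deletes it. Reduced to Python:

    import os, subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    FRAGMENTS = ["/etc/nftables/edpm-flushes.nft",
                 "/etc/nftables/edpm-rules.nft",
                 "/etc/nftables/edpm-update-jumps.nft"]

    def reload_if_changed():
        if not os.path.exists(MARKER):
            return False                   # rules unchanged since last apply
        blob = b"".join(open(p, "rb").read() for p in FRAGMENTS)
        subprocess.run(["nft", "-f", "-"], input=blob, check=True)
        os.unlink(MARKER)                  # consume the marker on success
        return True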
Nov 25 20:03:51 compute-0 sshd-session[69295]: Connection closed by 192.168.122.30 port 59352
Nov 25 20:03:51 compute-0 sshd-session[69292]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:03:51 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 20:03:51 compute-0 systemd[1]: session-16.scope: Consumed 5.405s CPU time.
Nov 25 20:03:51 compute-0 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Nov 25 20:03:51 compute-0 systemd-logind[789]: Removed session 16.
Nov 25 20:03:56 compute-0 sshd-session[70395]: Accepted publickey for zuul from 192.168.122.30 port 33298 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:03:56 compute-0 systemd-logind[789]: New session 17 of user zuul.
Nov 25 20:03:56 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 25 20:03:56 compute-0 sshd-session[70395]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:03:57 compute-0 python3.9[70548]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:03:58 compute-0 sudo[70702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyefwuxhuorwhlaqordzvgxshehvlbee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101038.2056282-34-268572630030260/AnsiballZ_setup.py'
Nov 25 20:03:58 compute-0 sudo[70702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:58 compute-0 python3.9[70704]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:03:59 compute-0 sudo[70702]: pam_unix(sudo:session): session closed for user root
Nov 25 20:03:59 compute-0 sudo[70786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvokuqoohqtsapvhtsjngcjsvtlbkhcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101038.2056282-34-268572630030260/AnsiballZ_dnf.py'
Nov 25 20:03:59 compute-0 sudo[70786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:03:59 compute-0 python3.9[70788]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 20:04:01 compute-0 sudo[70786]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:02 compute-0 python3.9[70939]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:04:03 compute-0 python3.9[71090]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
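Two reboot-required probes run back to back: needs-restarting -r (from yum-utils) exits 1 when an updated core package calls for a reboot, and the find looks for flag files under /var/lib/openstack/reboot_required/. Combined into one predicate:

    import glob, subprocess

    def reboot_required():
        # needs-restarting -r: exit 0 = no reboot needed, 1 = reboot needed
        rc = subprocess.run(["needs-restarting", "-r"],
                            capture_output=True).returncode
        flags = glob.glob("/var/lib/openstack/reboot_required/*")
        return rc != 0 or bool(flags)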
Nov 25 20:04:04 compute-0 python3.9[71240]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:04:04 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 20:04:05 compute-0 python3.9[71391]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:04:05 compute-0 sshd-session[70398]: Connection closed by 192.168.122.30 port 33298
Nov 25 20:04:05 compute-0 sshd-session[70395]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:04:05 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 20:04:05 compute-0 systemd[1]: session-17.scope: Consumed 6.657s CPU time.
Nov 25 20:04:05 compute-0 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Nov 25 20:04:05 compute-0 systemd-logind[789]: Removed session 17.
Nov 25 20:04:12 compute-0 sshd-session[71416]: Accepted publickey for zuul from 38.102.83.150 port 52102 ssh2: RSA SHA256:A2IzWGkyPIJ9qDfl3onK8K/RA0W663rQ8oKe3YJ11n4
Nov 25 20:04:12 compute-0 systemd-logind[789]: New session 18 of user zuul.
Nov 25 20:04:12 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 25 20:04:12 compute-0 sshd-session[71416]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:04:13 compute-0 sudo[71492]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkpbevyqapxcsvdokpjzlkgdjiqieca ; /usr/bin/python3'
Nov 25 20:04:13 compute-0 sudo[71492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:13 compute-0 useradd[71496]: new group: name=ceph-admin, GID=42478
Nov 25 20:04:13 compute-0 useradd[71496]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 25 20:04:13 compute-0 sudo[71492]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:13 compute-0 sudo[71578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icqmpfdmjsztujdrztqtpsvvkcdbjujx ; /usr/bin/python3'
Nov 25 20:04:13 compute-0 sudo[71578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:14 compute-0 sudo[71578]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:14 compute-0 sudo[71651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpeodinzicrheuunqhcyujwkucewiffn ; /usr/bin/python3'
Nov 25 20:04:14 compute-0 sudo[71651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:14 compute-0 sudo[71651]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:14 compute-0 sudo[71701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbdrbqxbmivgtaupwbpynldiuwgztgnd ; /usr/bin/python3'
Nov 25 20:04:14 compute-0 sudo[71701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:15 compute-0 sudo[71701]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:15 compute-0 sudo[71727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vviracdmudilextxoetalzdkxdtyayuc ; /usr/bin/python3'
Nov 25 20:04:15 compute-0 sudo[71727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:15 compute-0 sudo[71727]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:15 compute-0 sudo[71753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhemxxnutnnmirqiddjmdvpvoqkxdvcl ; /usr/bin/python3'
Nov 25 20:04:15 compute-0 sudo[71753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:15 compute-0 sudo[71753]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:16 compute-0 sudo[71779]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffputimsvaffptmlmdslcrqhwlpdzlpf ; /usr/bin/python3'
Nov 25 20:04:16 compute-0 sudo[71779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:16 compute-0 sudo[71779]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:16 compute-0 sudo[71857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcvhbhckpawkegkowevuabvzpuszemou ; /usr/bin/python3'
Nov 25 20:04:16 compute-0 sudo[71857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:16 compute-0 sudo[71857]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:17 compute-0 sudo[71930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmxcrykjgoevoyvwqdtwgowmituvjyv ; /usr/bin/python3'
Nov 25 20:04:17 compute-0 sudo[71930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:17 compute-0 sudo[71930]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:17 compute-0 sudo[72032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxbquuvigeolitzxuuafrkrbddbnoxzu ; /usr/bin/python3'
Nov 25 20:04:17 compute-0 sudo[72032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:18 compute-0 sudo[72032]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:18 compute-0 sudo[72105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujuiewfjpvifhqeaycdtaywjqsjisgnp ; /usr/bin/python3'
Nov 25 20:04:18 compute-0 sudo[72105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:18 compute-0 sudo[72105]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:18 compute-0 sudo[72155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldiqxjsywaceljidqfwimujikiqdkpmf ; /usr/bin/python3'
Nov 25 20:04:18 compute-0 sudo[72155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:19 compute-0 python3[72157]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:04:20 compute-0 sudo[72155]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:20 compute-0 sudo[72250]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbbnazgnixwmqlvzaigxhuljtxbjjfnv ; /usr/bin/python3'
Nov 25 20:04:20 compute-0 sudo[72250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:21 compute-0 python3[72252]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 20:04:21 compute-0 chronyd[58628]: Selected source 216.232.132.102 (pool.ntp.org)
Nov 25 20:04:22 compute-0 sudo[72250]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:22 compute-0 sudo[72277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktbecbmmexytqvpcycvwtahegwafegst ; /usr/bin/python3'
Nov 25 20:04:22 compute-0 sudo[72277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:22 compute-0 python3[72279]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 20:04:22 compute-0 sudo[72277]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:22 compute-0 sudo[72303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whgrcjqkyxufarglnazcacmxgoxgbzya ; /usr/bin/python3'
Nov 25 20:04:22 compute-0 sudo[72303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:23 compute-0 python3[72305]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:04:23 compute-0 kernel: loop: module loaded
Nov 25 20:04:23 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 25 20:04:23 compute-0 sudo[72303]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:23 compute-0 sudo[72338]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfciipgdnzxvvevykynjwmndpxmdghsz ; /usr/bin/python3'
Nov 25 20:04:23 compute-0 sudo[72338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:23 compute-0 python3[72340]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:04:23 compute-0 lvm[72343]: PV /dev/loop3 not used.
Nov 25 20:04:23 compute-0 lvm[72345]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 20:04:23 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 25 20:04:23 compute-0 lvm[72348]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 25 20:04:23 compute-0 lvm[72355]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 20:04:23 compute-0 lvm[72355]: VG ceph_vg0 finished
Nov 25 20:04:23 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 25 20:04:23 compute-0 sudo[72338]: pam_unix(sudo:session): session closed for user root
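The dd invocation (bs=1 count=0 seek=20G) writes no data, it only seeks, leaving a 20 GiB sparse file that losetup exposes as /dev/loop3; pvcreate/vgcreate/lvcreate then stack ceph_vg0/ceph_lv0 on the loop device as a fake OSD disk. The same steps, using truncate -s as the equivalent sparse-file shortcut:

    import subprocess

    IMG, LOOP = "/var/lib/ceph-osd-0.img", "/dev/loop3"
    VG, LV = "ceph_vg0", "ceph_lv0"

    def make_osd_backing():
        # Sparse 20 GiB file: allocates no blocks, like dd count=0 seek=20G
        subprocess.run(["truncate", "-s", "20G", IMG], check=True)
        subprocess.run(["losetup", LOOP, IMG], check=True)
        subprocess.run(["pvcreate", LOOP], check=True)
        subprocess.run(["vgcreate", VG, LOOP], check=True)
        # One LV consuming every free extent in the VG
        subprocess.run(["lvcreate", "-n", LV, "-l", "+100%FREE", VG],
                       check=True)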
Nov 25 20:04:24 compute-0 sudo[72431]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryxnhzoknctyywscspbxlmlkgmjrebjn ; /usr/bin/python3'
Nov 25 20:04:24 compute-0 sudo[72431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:24 compute-0 python3[72433]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 20:04:24 compute-0 sudo[72431]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:24 compute-0 sudo[72504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tptraexpjhkvhtzezkavlwugpbfqiozh ; /usr/bin/python3'
Nov 25 20:04:24 compute-0 sudo[72504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:24 compute-0 python3[72506]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764101063.9226153-36262-127071593689945/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:24 compute-0 sudo[72504]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:25 compute-0 sudo[72554]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ononlodstzuyjrlaihvgvxyjxxczuuof ; /usr/bin/python3'
Nov 25 20:04:25 compute-0 sudo[72554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:25 compute-0 python3[72556]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:04:25 compute-0 systemd[1]: Reloading.
Nov 25 20:04:25 compute-0 systemd-sysv-generator[72586]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:25 compute-0 systemd-rc-local-generator[72582]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:25 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 25 20:04:25 compute-0 bash[72597]: /dev/loop3: [64513]:4327754 (/var/lib/ceph-osd-0.img)
Nov 25 20:04:25 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 25 20:04:25 compute-0 sudo[72554]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:25 compute-0 lvm[72598]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 20:04:25 compute-0 lvm[72598]: VG ceph_vg0 finished
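ceph-osd-losetup-0.service exists so the loop device is re-attached after a reboot; the unit is rendered from ceph-osd-losetup.service.j2 and its body never appears in this log, so the content below is an assumption about what such a oneshot unit plausibly contains (only the filename, the "Ceph OSD losetup" description, and the losetup behavior are taken from the log):

    import subprocess, textwrap

    UNIT_NAME = "ceph-osd-losetup-0.service"
    # Assumed unit body, not the rendered template from the play
    UNIT = textwrap.dedent("""\
        [Unit]
        Description=Ceph OSD losetup
        After=local-fs.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

        [Install]
        WantedBy=multi-user.target
        """)

    def install_unit():
        with open(f"/etc/systemd/system/{UNIT_NAME}", "w") as f:
            f.write(UNIT)
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", "--now", UNIT_NAME], check=True)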
Nov 25 20:04:26 compute-0 sudo[72622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdpymadabhntrdylpfbeqerwwvxallqb ; /usr/bin/python3'
Nov 25 20:04:26 compute-0 sudo[72622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:26 compute-0 python3[72624]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 20:04:27 compute-0 sudo[72622]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:27 compute-0 sudo[72649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwlndzpetrrbcnvuoetfxixkodazveo ; /usr/bin/python3'
Nov 25 20:04:27 compute-0 sudo[72649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:27 compute-0 python3[72651]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 20:04:27 compute-0 sudo[72649]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:28 compute-0 sudo[72675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmzoehoizjsvhfrocrlpsxxtokzuboic ; /usr/bin/python3'
Nov 25 20:04:28 compute-0 sudo[72675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:28 compute-0 python3[72677]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:04:28 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 25 20:04:28 compute-0 sudo[72675]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:28 compute-0 sudo[72707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubzcotayqikoriixxfwkpezawdzbqcjd ; /usr/bin/python3'
Nov 25 20:04:28 compute-0 sudo[72707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:28 compute-0 python3[72709]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:04:28 compute-0 lvm[72712]: PV /dev/loop4 not used.
Nov 25 20:04:28 compute-0 lvm[72722]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 20:04:29 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 25 20:04:29 compute-0 sudo[72707]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:29 compute-0 lvm[72724]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 25 20:04:29 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
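
The LVM task layers a physical volume, a one-PV volume group, and a single logical volume over the loop device; the lvm/systemd lines that follow are udev-triggered autoactivation bringing ceph_vg1 online. Restated as plain commands (taken from the logged task):

    sudo pvcreate /dev/loop4                          # label the loop device as an LVM PV
    sudo vgcreate ceph_vg1 /dev/loop4                 # dedicated VG for this OSD
    sudo lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1   # one LV consuming the whole VG
    sudo lvs                                          # expect: 1 LV in ceph_vg1
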
Nov 25 20:04:29 compute-0 sudo[72800]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqyulykvmzkosgnacitltlbywyhfooud ; /usr/bin/python3'
Nov 25 20:04:29 compute-0 sudo[72800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:29 compute-0 python3[72802]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 20:04:29 compute-0 sudo[72800]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:30 compute-0 sudo[72873]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojvysrhdjkuvrdsyqtgglbbqseavrjfo ; /usr/bin/python3'
Nov 25 20:04:30 compute-0 sudo[72873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:30 compute-0 python3[72875]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764101069.1575155-36289-129726441159237/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:30 compute-0 sudo[72873]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:30 compute-0 sudo[72923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftdolaajmwgzmhowxookrrhwlciupgzd ; /usr/bin/python3'
Nov 25 20:04:30 compute-0 sudo[72923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:30 compute-0 python3[72925]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:04:30 compute-0 systemd[1]: Reloading.
Nov 25 20:04:30 compute-0 systemd-rc-local-generator[72946]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:30 compute-0 systemd-sysv-generator[72950]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:31 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 25 20:04:31 compute-0 bash[72965]: /dev/loop4: [64513]:4327909 (/var/lib/ceph-osd-1.img)
Nov 25 20:04:31 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 25 20:04:31 compute-0 lvm[72966]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 20:04:31 compute-0 lvm[72966]: VG ceph_vg1 finished
Nov 25 20:04:31 compute-0 sudo[72923]: pam_unix(sudo:session): session closed for user root
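
The copied unit file's content is not logged (only its SHA-1), but its purpose is evident from the start-up lines that follow: re-bind /var/lib/ceph-osd-1.img to /dev/loop4 at boot so the LVM stack underneath the OSD survives reboots, and the bash output "/dev/loop4: [64513]:4327909 (...)" is the losetup query it runs. A minimal sketch of what the rendered ceph-osd-losetup.service.j2 could look like; the actual template may differ:

    # Hypothetical reconstruction -- the deployed unit's real content was not logged.
    cat <<'EOF' | sudo tee /etc/systemd/system/ceph-osd-losetup-1.service
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    # Print the existing binding, or (re)attach the image if the device is free.
    ExecStart=/bin/bash -c '/sbin/losetup /dev/loop4 || /sbin/losetup /dev/loop4 /var/lib/ceph-osd-1.img'
    RemainAfterExit=true

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl daemon-reload
    sudo systemctl enable --now ceph-osd-losetup-1.service
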
Nov 25 20:04:31 compute-0 sudo[72990]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awdjpzhompdfwgpbnaeuwdgimsnngoqi ; /usr/bin/python3'
Nov 25 20:04:31 compute-0 sudo[72990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:31 compute-0 python3[72992]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 20:04:32 compute-0 sudo[72990]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:32 compute-0 sudo[73017]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfayxblesnzbxlfkoohwzyhnrbtejkoq ; /usr/bin/python3'
Nov 25 20:04:32 compute-0 sudo[73017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:33 compute-0 python3[73019]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 20:04:33 compute-0 sudo[73017]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:33 compute-0 sudo[73043]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xumanydfpbnljuichyqlsvyqizypcmsl ; /usr/bin/python3'
Nov 25 20:04:33 compute-0 sudo[73043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:33 compute-0 python3[73045]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:04:33 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 25 20:04:33 compute-0 sudo[73043]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:33 compute-0 sudo[73075]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwngoxydxcjgrirshmgqsojhdnkrpdld ; /usr/bin/python3'
Nov 25 20:04:33 compute-0 sudo[73075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:33 compute-0 python3[73077]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:04:33 compute-0 lvm[73080]: PV /dev/loop5 not used.
Nov 25 20:04:34 compute-0 lvm[73082]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 20:04:34 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 25 20:04:34 compute-0 lvm[73093]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 20:04:34 compute-0 lvm[73093]: VG ceph_vg2 finished
Nov 25 20:04:34 compute-0 lvm[73090]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 25 20:04:34 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 25 20:04:34 compute-0 sudo[73075]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:34 compute-0 sudo[73169]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnsdvlnuflglaqyvwowwqhizfqglbrvs ; /usr/bin/python3'
Nov 25 20:04:34 compute-0 sudo[73169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:34 compute-0 python3[73171]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 20:04:34 compute-0 sudo[73169]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:35 compute-0 sudo[73242]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syvtkegmaanscdltselmcclxuihtgbco ; /usr/bin/python3'
Nov 25 20:04:35 compute-0 sudo[73242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:35 compute-0 python3[73244]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764101074.41693-36316-52785632158699/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:35 compute-0 sudo[73242]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:35 compute-0 sudo[73292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdvfhvdscqtygxltzwxezhjrelkulrah ; /usr/bin/python3'
Nov 25 20:04:35 compute-0 sudo[73292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:35 compute-0 python3[73294]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:04:35 compute-0 systemd[1]: Reloading.
Nov 25 20:04:35 compute-0 systemd-sysv-generator[73323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:35 compute-0 systemd-rc-local-generator[73318]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:36 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 25 20:04:36 compute-0 bash[73335]: /dev/loop5: [64513]:4327911 (/var/lib/ceph-osd-2.img)
Nov 25 20:04:36 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 25 20:04:36 compute-0 lvm[73336]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 20:04:36 compute-0 lvm[73336]: VG ceph_vg2 finished
Nov 25 20:04:36 compute-0 sudo[73292]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:38 compute-0 python3[73360]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
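
Fact gathering is scoped here to the network subset only, which keeps the play fast between storage steps. The equivalent ad-hoc call (a sketch; the host pattern is illustrative):

    # Collect only network facts, skipping the full and minimal fact sets.
    ansible compute-0 -m ansible.builtin.setup -a 'gather_subset=!all,!min,network'
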
Nov 25 20:04:40 compute-0 sudo[73451]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jetdosxhzezyuzgpgfehrqibihglyqrh ; /usr/bin/python3'
Nov 25 20:04:40 compute-0 sudo[73451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:40 compute-0 python3[73453]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 20:04:42 compute-0 groupadd[73459]: group added to /etc/group: name=cephadm, GID=992
Nov 25 20:04:42 compute-0 groupadd[73459]: group added to /etc/gshadow: name=cephadm
Nov 25 20:04:42 compute-0 groupadd[73459]: new group: name=cephadm, GID=992
Nov 25 20:04:42 compute-0 useradd[73466]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 25 20:04:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 20:04:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 20:04:42 compute-0 sudo[73451]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:42 compute-0 sudo[73561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhbmjvysgxgesdzyjfoufpphcexpkytc ; /usr/bin/python3'
Nov 25 20:04:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 20:04:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 20:04:42 compute-0 systemd[1]: run-rc25585f9f9fc412a9aa8ea7bd8afc80b.service: Deactivated successfully.
Nov 25 20:04:42 compute-0 sudo[73561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:43 compute-0 python3[73564]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 20:04:43 compute-0 sudo[73561]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:43 compute-0 sudo[73590]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwwtqewvqhyuswbtfylduwargudipbcl ; /usr/bin/python3'
Nov 25 20:04:43 compute-0 sudo[73590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:43 compute-0 python3[73592]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:04:43 compute-0 sudo[73590]: pam_unix(sudo:session): session closed for user root
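
cephadm ls enumerates the Ceph daemons already deployed on this host; on a node that has only just received the cephadm package it reports an empty list, which is presumably what the playbook checks before bootstrapping. A sketch of the expected exchange:

    sudo /usr/sbin/cephadm ls --no-detail
    # Expected on a host with no Ceph daemons yet:
    # []
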
Nov 25 20:04:44 compute-0 sudo[73653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anaxugokyqbgagyfxsxeutaysniqvwof ; /usr/bin/python3'
Nov 25 20:04:44 compute-0 sudo[73653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:44 compute-0 python3[73655]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:44 compute-0 sudo[73653]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:04:44 compute-0 sudo[73679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erlnwietowoawcvawjuzvduwmnwmsnup ; /usr/bin/python3'
Nov 25 20:04:44 compute-0 sudo[73679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:44 compute-0 python3[73681]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:44 compute-0 sudo[73679]: pam_unix(sudo:session): session closed for user root
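
The two file tasks above simply pre-create the configuration and spec directories with the required modes and ownership. The non-Ansible equivalent (a sketch):

    sudo install -d -m 0755 /etc/ceph
    sudo install -d -m 0755 -o ceph-admin -g ceph-admin /home/ceph-admin/specs
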
Nov 25 20:04:45 compute-0 sudo[73757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwhiinpdgdudwzvddfkrvigjjbxwiueo ; /usr/bin/python3'
Nov 25 20:04:45 compute-0 sudo[73757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:45 compute-0 python3[73759]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 20:04:45 compute-0 sudo[73757]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:46 compute-0 sudo[73830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmoibauwxazccaqxphdgnnxhozvqgwwk ; /usr/bin/python3'
Nov 25 20:04:46 compute-0 sudo[73830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:46 compute-0 python3[73832]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764101085.4552474-36463-132033069564406/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:46 compute-0 sudo[73830]: pam_unix(sudo:session): session closed for user root
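
The spec file's content is not logged (checksum only). For orientation, a minimal hypothetical cephadm service specification of the shape usually placed at this path, here pointing the OSD service at the two logical volumes created earlier; the real ceph_spec.yaml may differ:

    # Hypothetical example -- the deployed spec's real content was not logged.
    cat <<'EOF' > /home/ceph-admin/specs/ceph_spec.yaml
    service_type: mon
    placement:
      hosts:
        - compute-0
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg1/ceph_lv1
          - /dev/ceph_vg2/ceph_lv2
    EOF
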
Nov 25 20:04:46 compute-0 sudo[73932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybqgfgaqeenusgphouryenteazgoudks ; /usr/bin/python3'
Nov 25 20:04:46 compute-0 sudo[73932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:47 compute-0 python3[73934]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 20:04:47 compute-0 sudo[73932]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:47 compute-0 sudo[74005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqtwokdyswvlramhumkxleebootgryvr ; /usr/bin/python3'
Nov 25 20:04:47 compute-0 sudo[74005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:47 compute-0 python3[74007]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764101086.750813-36481-16564164498451/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:47 compute-0 sudo[74005]: pam_unix(sudo:session): session closed for user root
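
assimilate_ceph.conf is the seed ceph.conf handed to bootstrap below via --config; its content is likewise not logged. A minimal hypothetical example of such a file, using standard Ceph option names with illustrative values suited to a single-host lab cluster:

    # Hypothetical example -- the real assimilate_ceph.conf was not logged.
    cat <<'EOF' > /home/ceph-admin/assimilate_ceph.conf
    [global]
    osd_pool_default_size = 1
    osd_pool_default_min_size = 1
    EOF
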
Nov 25 20:04:47 compute-0 sudo[74055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiyptebnphgtbexazscgjpddirttsnpv ; /usr/bin/python3'
Nov 25 20:04:47 compute-0 sudo[74055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:47 compute-0 python3[74057]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 20:04:47 compute-0 sudo[74055]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:48 compute-0 sudo[74083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brjhtwoikfexbdcyzsuebiijoxmchsod ; /usr/bin/python3'
Nov 25 20:04:48 compute-0 sudo[74083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:48 compute-0 python3[74085]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 20:04:48 compute-0 sudo[74083]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:48 compute-0 sudo[74111]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urniuirnouhiybcaujugjsrnvfpplmnb ; /usr/bin/python3'
Nov 25 20:04:48 compute-0 sudo[74111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:48 compute-0 python3[74113]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 20:04:48 compute-0 sudo[74111]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:49 compute-0 sudo[74139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oznaieinlpguqknulzntqbshqaxiwvtj ; /usr/bin/python3'
Nov 25 20:04:49 compute-0 sudo[74139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:04:49 compute-0 python3[74141]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config /home/ceph-admin/assimilate_ceph.conf --single-host-defaults --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
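
This is the pivotal step: cephadm bootstrap creates the first mon (at 192.168.122.100) and mgr on this host, reusing the pre-seeded ceph-admin SSH keypair and a fixed fsid, and writes the admin keyring and ceph.conf to /etc/ceph. Once it returns, the usual way to confirm the cluster is up (a sketch, not taken from these logs):

    # Inspect the freshly bootstrapped single-host cluster.
    sudo cephadm shell -- ceph -s
    sudo cephadm shell -- ceph orch host ls
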
Nov 25 20:04:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:04:49 compute-0 sshd-session[74157]: Accepted publickey for ceph-admin from 192.168.122.100 port 53232 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:04:49 compute-0 systemd-logind[789]: New session 19 of user ceph-admin.
Nov 25 20:04:49 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 20:04:49 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 20:04:49 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 20:04:49 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 25 20:04:49 compute-0 systemd[74161]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:04:49 compute-0 systemd[74161]: Queued start job for default target Main User Target.
Nov 25 20:04:49 compute-0 systemd[74161]: Created slice User Application Slice.
Nov 25 20:04:49 compute-0 systemd[74161]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 20:04:49 compute-0 systemd[74161]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 20:04:49 compute-0 systemd[74161]: Reached target Paths.
Nov 25 20:04:49 compute-0 systemd[74161]: Reached target Timers.
Nov 25 20:04:49 compute-0 systemd[74161]: Starting D-Bus User Message Bus Socket...
Nov 25 20:04:49 compute-0 systemd[74161]: Starting Create User's Volatile Files and Directories...
Nov 25 20:04:49 compute-0 systemd[74161]: Listening on D-Bus User Message Bus Socket.
Nov 25 20:04:49 compute-0 systemd[74161]: Reached target Sockets.
Nov 25 20:04:49 compute-0 systemd[74161]: Finished Create User's Volatile Files and Directories.
Nov 25 20:04:49 compute-0 systemd[74161]: Reached target Basic System.
Nov 25 20:04:49 compute-0 systemd[74161]: Reached target Main User Target.
Nov 25 20:04:49 compute-0 systemd[74161]: Startup finished in 158ms.
Nov 25 20:04:49 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 25 20:04:49 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Nov 25 20:04:49 compute-0 sshd-session[74157]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:04:49 compute-0 sudo[74177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 25 20:04:50 compute-0 sudo[74177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:04:50 compute-0 sudo[74177]: pam_unix(sudo:session): session closed for user root
Nov 25 20:04:50 compute-0 sshd-session[74176]: Received disconnect from 192.168.122.100 port 53232:11: disconnected by user
Nov 25 20:04:50 compute-0 sshd-session[74176]: Disconnected from user ceph-admin 192.168.122.100 port 53232
Nov 25 20:04:50 compute-0 sshd-session[74157]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 20:04:50 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 25 20:04:50 compute-0 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Nov 25 20:04:50 compute-0 systemd-logind[789]: Removed session 19.
Nov 25 20:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2820861538-merged.mount: Deactivated successfully.
Nov 25 20:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2820861538-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 20:05:00 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 25 20:05:00 compute-0 systemd[74161]: Activating special unit Exit the Session...
Nov 25 20:05:00 compute-0 systemd[74161]: Stopped target Main User Target.
Nov 25 20:05:00 compute-0 systemd[74161]: Stopped target Basic System.
Nov 25 20:05:00 compute-0 systemd[74161]: Stopped target Paths.
Nov 25 20:05:00 compute-0 systemd[74161]: Stopped target Sockets.
Nov 25 20:05:00 compute-0 systemd[74161]: Stopped target Timers.
Nov 25 20:05:00 compute-0 systemd[74161]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 25 20:05:00 compute-0 systemd[74161]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 20:05:00 compute-0 systemd[74161]: Closed D-Bus User Message Bus Socket.
Nov 25 20:05:00 compute-0 systemd[74161]: Stopped Create User's Volatile Files and Directories.
Nov 25 20:05:00 compute-0 systemd[74161]: Removed slice User Application Slice.
Nov 25 20:05:00 compute-0 systemd[74161]: Reached target Shutdown.
Nov 25 20:05:00 compute-0 systemd[74161]: Finished Exit the Session.
Nov 25 20:05:00 compute-0 systemd[74161]: Reached target Exit the Session.
Nov 25 20:05:00 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 25 20:05:00 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 25 20:05:00 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 25 20:05:00 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 25 20:05:00 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 25 20:05:00 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 25 20:05:00 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 25 20:05:03 compute-0 podman[74214]: 2025-11-25 20:05:03.435731683 +0000 UTC m=+13.334218810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:03 compute-0 podman[74273]: 2025-11-25 20:05:03.509262854 +0000 UTC m=+0.051550434 container create ce17ebd7c63f5c498119d4c8e6cc5b3da5686c87382543545ad57e443112679b (image=quay.io/ceph/ceph:v18, name=lucid_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2839490306-merged.mount: Deactivated successfully.
Nov 25 20:05:03 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 20:05:03 compute-0 systemd[1]: Started libpod-conmon-ce17ebd7c63f5c498119d4c8e6cc5b3da5686c87382543545ad57e443112679b.scope.
Nov 25 20:05:03 compute-0 podman[74273]: 2025-11-25 20:05:03.479993337 +0000 UTC m=+0.022280957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:03 compute-0 podman[74273]: 2025-11-25 20:05:03.611360831 +0000 UTC m=+0.153648491 container init ce17ebd7c63f5c498119d4c8e6cc5b3da5686c87382543545ad57e443112679b (image=quay.io/ceph/ceph:v18, name=lucid_kalam, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:05:03 compute-0 podman[74273]: 2025-11-25 20:05:03.61866191 +0000 UTC m=+0.160949500 container start ce17ebd7c63f5c498119d4c8e6cc5b3da5686c87382543545ad57e443112679b (image=quay.io/ceph/ceph:v18, name=lucid_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:03 compute-0 podman[74273]: 2025-11-25 20:05:03.622162225 +0000 UTC m=+0.164449825 container attach ce17ebd7c63f5c498119d4c8e6cc5b3da5686c87382543545ad57e443112679b (image=quay.io/ceph/ceph:v18, name=lucid_kalam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:03 compute-0 lucid_kalam[74289]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 20:05:03 compute-0 systemd[1]: libpod-ce17ebd7c63f5c498119d4c8e6cc5b3da5686c87382543545ad57e443112679b.scope: Deactivated successfully.
Nov 25 20:05:03 compute-0 podman[74273]: 2025-11-25 20:05:03.944836003 +0000 UTC m=+0.487123573 container died ce17ebd7c63f5c498119d4c8e6cc5b3da5686c87382543545ad57e443112679b (image=quay.io/ceph/ceph:v18, name=lucid_kalam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:05:04 compute-0 podman[74273]: 2025-11-25 20:05:04.001361921 +0000 UTC m=+0.543649521 container remove ce17ebd7c63f5c498119d4c8e6cc5b3da5686c87382543545ad57e443112679b (image=quay.io/ceph/ceph:v18, name=lucid_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:05:04 compute-0 systemd[1]: libpod-conmon-ce17ebd7c63f5c498119d4c8e6cc5b3da5686c87382543545ad57e443112679b.scope: Deactivated successfully.
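
From here on, bootstrap drives podman directly: each create/init/start/attach/died/remove sequence is one short-lived helper container from quay.io/ceph/ceph:v18, e.g. the version probe that printed "ceph version 18.2.7 ... reef (stable)" above, a uid/gid probe ("167 167"), and several keyring generations further down. The pattern is roughly (a sketch):

    # One-shot helper container, removed on exit -- the lifecycle the podman lines record.
    podman run --rm quay.io/ceph/ceph:v18 ceph --version
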
Nov 25 20:05:04 compute-0 podman[74306]: 2025-11-25 20:05:04.069944447 +0000 UTC m=+0.043188136 container create 8969416e1fe588813f4d2906e444bbb2a1f02ef4b057999624e39dff2e693339 (image=quay.io/ceph/ceph:v18, name=magical_haslett, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:05:04 compute-0 podman[74306]: 2025-11-25 20:05:04.051835874 +0000 UTC m=+0.025079583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:04 compute-0 systemd[1]: Started libpod-conmon-8969416e1fe588813f4d2906e444bbb2a1f02ef4b057999624e39dff2e693339.scope.
Nov 25 20:05:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:04 compute-0 podman[74306]: 2025-11-25 20:05:04.190058715 +0000 UTC m=+0.163302424 container init 8969416e1fe588813f4d2906e444bbb2a1f02ef4b057999624e39dff2e693339 (image=quay.io/ceph/ceph:v18, name=magical_haslett, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:04 compute-0 podman[74306]: 2025-11-25 20:05:04.199196623 +0000 UTC m=+0.172440312 container start 8969416e1fe588813f4d2906e444bbb2a1f02ef4b057999624e39dff2e693339 (image=quay.io/ceph/ceph:v18, name=magical_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:04 compute-0 magical_haslett[74322]: 167 167
Nov 25 20:05:04 compute-0 podman[74306]: 2025-11-25 20:05:04.202627786 +0000 UTC m=+0.175871505 container attach 8969416e1fe588813f4d2906e444bbb2a1f02ef4b057999624e39dff2e693339 (image=quay.io/ceph/ceph:v18, name=magical_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:05:04 compute-0 systemd[1]: libpod-8969416e1fe588813f4d2906e444bbb2a1f02ef4b057999624e39dff2e693339.scope: Deactivated successfully.
Nov 25 20:05:04 compute-0 podman[74306]: 2025-11-25 20:05:04.204373694 +0000 UTC m=+0.177617423 container died 8969416e1fe588813f4d2906e444bbb2a1f02ef4b057999624e39dff2e693339 (image=quay.io/ceph/ceph:v18, name=magical_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:05:04 compute-0 podman[74306]: 2025-11-25 20:05:04.267530492 +0000 UTC m=+0.240774171 container remove 8969416e1fe588813f4d2906e444bbb2a1f02ef4b057999624e39dff2e693339 (image=quay.io/ceph/ceph:v18, name=magical_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:05:04 compute-0 systemd[1]: libpod-conmon-8969416e1fe588813f4d2906e444bbb2a1f02ef4b057999624e39dff2e693339.scope: Deactivated successfully.
Nov 25 20:05:04 compute-0 podman[74337]: 2025-11-25 20:05:04.348874425 +0000 UTC m=+0.053788154 container create 61ae575ce4b79c6afb9323f74b2744f231a18737603bd3626032884a2f53f5f9 (image=quay.io/ceph/ceph:v18, name=infallible_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:05:04 compute-0 systemd[1]: Started libpod-conmon-61ae575ce4b79c6afb9323f74b2744f231a18737603bd3626032884a2f53f5f9.scope.
Nov 25 20:05:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:04 compute-0 podman[74337]: 2025-11-25 20:05:04.326497276 +0000 UTC m=+0.031410985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:04 compute-0 podman[74337]: 2025-11-25 20:05:04.429281013 +0000 UTC m=+0.134194782 container init 61ae575ce4b79c6afb9323f74b2744f231a18737603bd3626032884a2f53f5f9 (image=quay.io/ceph/ceph:v18, name=infallible_hopper, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 20:05:04 compute-0 podman[74337]: 2025-11-25 20:05:04.440296642 +0000 UTC m=+0.145210371 container start 61ae575ce4b79c6afb9323f74b2744f231a18737603bd3626032884a2f53f5f9 (image=quay.io/ceph/ceph:v18, name=infallible_hopper, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-78ce33d0c0fb38073a15dc6499dbe99dfddad8916c768d8a9fdebc608d25efdf-merged.mount: Deactivated successfully.
Nov 25 20:05:04 compute-0 podman[74337]: 2025-11-25 20:05:04.445159235 +0000 UTC m=+0.150072964 container attach 61ae575ce4b79c6afb9323f74b2744f231a18737603bd3626032884a2f53f5f9 (image=quay.io/ceph/ceph:v18, name=infallible_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:05:04 compute-0 infallible_hopper[74354]: AQDwCyZpSMC8GxAAMY2GpbFawseNRVfhhM2INA==
Nov 25 20:05:04 compute-0 systemd[1]: libpod-61ae575ce4b79c6afb9323f74b2744f231a18737603bd3626032884a2f53f5f9.scope: Deactivated successfully.
Nov 25 20:05:04 compute-0 podman[74337]: 2025-11-25 20:05:04.469492396 +0000 UTC m=+0.174406125 container died 61ae575ce4b79c6afb9323f74b2744f231a18737603bd3626032884a2f53f5f9 (image=quay.io/ceph/ceph:v18, name=infallible_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1a9d4aab9553eb0fa7af65036cfa46ea6909f79721f88d1ad83a4366ab82f3-merged.mount: Deactivated successfully.
Nov 25 20:05:04 compute-0 podman[74337]: 2025-11-25 20:05:04.519351173 +0000 UTC m=+0.224264902 container remove 61ae575ce4b79c6afb9323f74b2744f231a18737603bd3626032884a2f53f5f9 (image=quay.io/ceph/ceph:v18, name=infallible_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 20:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:04 compute-0 systemd[1]: libpod-conmon-61ae575ce4b79c6afb9323f74b2744f231a18737603bd3626032884a2f53f5f9.scope: Deactivated successfully.
Nov 25 20:05:04 compute-0 podman[74372]: 2025-11-25 20:05:04.60267961 +0000 UTC m=+0.056134418 container create b4b3fa7def06beee136807b0e37051e479379fbeb0690dfaa8c4835ec195346c (image=quay.io/ceph/ceph:v18, name=exciting_mcnulty, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:05:04 compute-0 systemd[1]: Started libpod-conmon-b4b3fa7def06beee136807b0e37051e479379fbeb0690dfaa8c4835ec195346c.scope.
Nov 25 20:05:04 compute-0 podman[74372]: 2025-11-25 20:05:04.576677263 +0000 UTC m=+0.030132131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:04 compute-0 podman[74372]: 2025-11-25 20:05:04.686855 +0000 UTC m=+0.140309838 container init b4b3fa7def06beee136807b0e37051e479379fbeb0690dfaa8c4835ec195346c (image=quay.io/ceph/ceph:v18, name=exciting_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:04 compute-0 podman[74372]: 2025-11-25 20:05:04.697614673 +0000 UTC m=+0.151069471 container start b4b3fa7def06beee136807b0e37051e479379fbeb0690dfaa8c4835ec195346c (image=quay.io/ceph/ceph:v18, name=exciting_mcnulty, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 20:05:04 compute-0 podman[74372]: 2025-11-25 20:05:04.702298909 +0000 UTC m=+0.155753667 container attach b4b3fa7def06beee136807b0e37051e479379fbeb0690dfaa8c4835ec195346c (image=quay.io/ceph/ceph:v18, name=exciting_mcnulty, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:05:04 compute-0 exciting_mcnulty[74388]: AQDwCyZpecw7KxAAh5tXiJHMCUKax16DcDZYMg==
Nov 25 20:05:04 compute-0 systemd[1]: libpod-b4b3fa7def06beee136807b0e37051e479379fbeb0690dfaa8c4835ec195346c.scope: Deactivated successfully.
Nov 25 20:05:04 compute-0 podman[74372]: 2025-11-25 20:05:04.729862269 +0000 UTC m=+0.183317067 container died b4b3fa7def06beee136807b0e37051e479379fbeb0690dfaa8c4835ec195346c (image=quay.io/ceph/ceph:v18, name=exciting_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 20:05:04 compute-0 podman[74372]: 2025-11-25 20:05:04.783409446 +0000 UTC m=+0.236864254 container remove b4b3fa7def06beee136807b0e37051e479379fbeb0690dfaa8c4835ec195346c (image=quay.io/ceph/ceph:v18, name=exciting_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:04 compute-0 systemd[1]: libpod-conmon-b4b3fa7def06beee136807b0e37051e479379fbeb0690dfaa8c4835ec195346c.scope: Deactivated successfully.
Nov 25 20:05:04 compute-0 podman[74407]: 2025-11-25 20:05:04.875496332 +0000 UTC m=+0.060306042 container create 21f8daa2c1d661a6a7c453c711db012ccd5be3c4799b55218aeb981c6cc70daf (image=quay.io/ceph/ceph:v18, name=cranky_hopper, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:05:04 compute-0 systemd[1]: Started libpod-conmon-21f8daa2c1d661a6a7c453c711db012ccd5be3c4799b55218aeb981c6cc70daf.scope.
Nov 25 20:05:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:04 compute-0 podman[74407]: 2025-11-25 20:05:04.856599528 +0000 UTC m=+0.041409258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:05 compute-0 podman[74407]: 2025-11-25 20:05:05.105522169 +0000 UTC m=+0.290331899 container init 21f8daa2c1d661a6a7c453c711db012ccd5be3c4799b55218aeb981c6cc70daf (image=quay.io/ceph/ceph:v18, name=cranky_hopper, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:05:05 compute-0 podman[74407]: 2025-11-25 20:05:05.115044249 +0000 UTC m=+0.299853969 container start 21f8daa2c1d661a6a7c453c711db012ccd5be3c4799b55218aeb981c6cc70daf (image=quay.io/ceph/ceph:v18, name=cranky_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:05:05 compute-0 podman[74407]: 2025-11-25 20:05:05.119376236 +0000 UTC m=+0.304186026 container attach 21f8daa2c1d661a6a7c453c711db012ccd5be3c4799b55218aeb981c6cc70daf (image=quay.io/ceph/ceph:v18, name=cranky_hopper, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:05:05 compute-0 cranky_hopper[74424]: AQDxCyZpFQPiCBAApl0yybncI/56nPQInL6RJA==
Nov 25 20:05:05 compute-0 systemd[1]: libpod-21f8daa2c1d661a6a7c453c711db012ccd5be3c4799b55218aeb981c6cc70daf.scope: Deactivated successfully.
Nov 25 20:05:05 compute-0 podman[74407]: 2025-11-25 20:05:05.154205034 +0000 UTC m=+0.339014754 container died 21f8daa2c1d661a6a7c453c711db012ccd5be3c4799b55218aeb981c6cc70daf (image=quay.io/ceph/ceph:v18, name=cranky_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:05 compute-0 podman[74407]: 2025-11-25 20:05:05.199777254 +0000 UTC m=+0.384586984 container remove 21f8daa2c1d661a6a7c453c711db012ccd5be3c4799b55218aeb981c6cc70daf (image=quay.io/ceph/ceph:v18, name=cranky_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 20:05:05 compute-0 systemd[1]: libpod-conmon-21f8daa2c1d661a6a7c453c711db012ccd5be3c4799b55218aeb981c6cc70daf.scope: Deactivated successfully.
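The six podman events above (create, init, start, attach, died, remove) for container 21f8daa2… within a third of a second are the signature of a one-shot helper container: cephadm bootstrap repeatedly does the equivalent of `podman run --rm` against quay.io/ceph/ceph:v18 and captures the output — the single `cranky_hopper` line looks like a freshly generated cephx secret, plausibly from something like `ceph-authtool --gen-print-key`, though that attribution is an assumption. A minimal sketch that reconstructs these lifecycles from a journal dump; the input filename is a placeholder, and the regex is fitted to the message shape visible above:

```python
#!/usr/bin/env python3
"""Group podman 'container <event> <id>' journal lines by container ID.

Matches lines shaped like the ones above, e.g.
  ... podman[74407]: 2025-11-25 20:05:05.105... +0000 UTC m=+0.290... container init 21f8daa2...
"""
import re
from collections import defaultdict

EVENT_RE = re.compile(
    r"podman\[\d+\]: \S+ \S+ \S+ UTC m=\S+ container (\w+) ([0-9a-f]{64})"
)

def container_lifecycles(path):
    events = defaultdict(list)  # container ID -> event names in log order
    with open(path) as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                event, cid = m.groups()
                events[cid].append(event)
    return events

if __name__ == "__main__":
    for cid, evs in container_lifecycles("messages.log").items():
        print(cid[:12], "->", " ".join(evs))
```

For the helpers in this section the output is `create init start attach died remove`; a long-lived daemon container would show only create/init/start until shutdown.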
Nov 25 20:05:05 compute-0 podman[74444]: 2025-11-25 20:05:05.303199117 +0000 UTC m=+0.066919911 container create 6d3d39601134db2fa33b452c8b4f19d585dd7bf5f87c27736afe0bcdb5d1b617 (image=quay.io/ceph/ceph:v18, name=nifty_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:05 compute-0 systemd[1]: Started libpod-conmon-6d3d39601134db2fa33b452c8b4f19d585dd7bf5f87c27736afe0bcdb5d1b617.scope.
Nov 25 20:05:05 compute-0 podman[74444]: 2025-11-25 20:05:05.275554615 +0000 UTC m=+0.039275449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa38d5e54bd5ef78c9ed62bb7bbb982ec023e8480e73156faec1f6ea25b34abd/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:05 compute-0 podman[74444]: 2025-11-25 20:05:05.404064041 +0000 UTC m=+0.167784875 container init 6d3d39601134db2fa33b452c8b4f19d585dd7bf5f87c27736afe0bcdb5d1b617 (image=quay.io/ceph/ceph:v18, name=nifty_turing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:05:05 compute-0 podman[74444]: 2025-11-25 20:05:05.410194978 +0000 UTC m=+0.173915772 container start 6d3d39601134db2fa33b452c8b4f19d585dd7bf5f87c27736afe0bcdb5d1b617 (image=quay.io/ceph/ceph:v18, name=nifty_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 20:05:05 compute-0 podman[74444]: 2025-11-25 20:05:05.413856867 +0000 UTC m=+0.177577651 container attach 6d3d39601134db2fa33b452c8b4f19d585dd7bf5f87c27736afe0bcdb5d1b617 (image=quay.io/ceph/ceph:v18, name=nifty_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:05 compute-0 nifty_turing[74461]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 25 20:05:05 compute-0 nifty_turing[74461]: setting min_mon_release = pacific
Nov 25 20:05:05 compute-0 nifty_turing[74461]: /usr/bin/monmaptool: set fsid to 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:05:05 compute-0 nifty_turing[74461]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 25 20:05:05 compute-0 systemd[1]: libpod-6d3d39601134db2fa33b452c8b4f19d585dd7bf5f87c27736afe0bcdb5d1b617.scope: Deactivated successfully.
Nov 25 20:05:05 compute-0 podman[74444]: 2025-11-25 20:05:05.447565935 +0000 UTC m=+0.211286729 container died 6d3d39601134db2fa33b452c8b4f19d585dd7bf5f87c27736afe0bcdb5d1b617 (image=quay.io/ceph/ceph:v18, name=nifty_turing, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa38d5e54bd5ef78c9ed62bb7bbb982ec023e8480e73156faec1f6ea25b34abd-merged.mount: Deactivated successfully.
Nov 25 20:05:05 compute-0 podman[74444]: 2025-11-25 20:05:05.496317861 +0000 UTC m=+0.260038655 container remove 6d3d39601134db2fa33b452c8b4f19d585dd7bf5f87c27736afe0bcdb5d1b617 (image=quay.io/ceph/ceph:v18, name=nifty_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:05 compute-0 systemd[1]: libpod-conmon-6d3d39601134db2fa33b452c8b4f19d585dd7bf5f87c27736afe0bcdb5d1b617.scope: Deactivated successfully.
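The nifty_turing helper's output shows monmaptool seeding epoch 0 of a single-monitor map with the cluster fsid and min_mon_release = pacific. A sketch of an equivalent invocation; `--create`, `--fsid`, `--addv` and `--set-min-mon-release` are standard monmaptool options, but the exact argument list cephadm used here is an assumption (the address vector is copied from the bind addrs ceph-mon logs further down):

```python
import subprocess

FSID = "712dd110-763a-5547-8ef7-acda1414fdce"   # from the monmaptool output above
MON_ADDRS = "[v2:192.168.122.100:3300,v1:192.168.122.100:6789]"

# Assumption: this mirrors what the helper container did; cephadm's real
# argument list may differ in detail.
subprocess.run(
    [
        "monmaptool", "--create", "--clobber",
        "--fsid", FSID,
        "--addv", "compute-0", MON_ADDRS,
        "--set-min-mon-release", "pacific",
        "/tmp/monmap",
    ],
    check=True,
)
```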
Nov 25 20:05:05 compute-0 podman[74479]: 2025-11-25 20:05:05.574679443 +0000 UTC m=+0.049210670 container create eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0 (image=quay.io/ceph/ceph:v18, name=peaceful_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:05:05 compute-0 systemd[1]: Started libpod-conmon-eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0.scope.
Nov 25 20:05:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e710b67d0d33288fbeef136a4be51b2044c74671540fa9d249f3f2b93ae9fb42/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e710b67d0d33288fbeef136a4be51b2044c74671540fa9d249f3f2b93ae9fb42/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e710b67d0d33288fbeef136a4be51b2044c74671540fa9d249f3f2b93ae9fb42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e710b67d0d33288fbeef136a4be51b2044c74671540fa9d249f3f2b93ae9fb42/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:05 compute-0 podman[74479]: 2025-11-25 20:05:05.553886737 +0000 UTC m=+0.028417994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:05 compute-0 podman[74479]: 2025-11-25 20:05:05.652939952 +0000 UTC m=+0.127471249 container init eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0 (image=quay.io/ceph/ceph:v18, name=peaceful_gauss, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:05:05 compute-0 podman[74479]: 2025-11-25 20:05:05.659401608 +0000 UTC m=+0.133932865 container start eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0 (image=quay.io/ceph/ceph:v18, name=peaceful_gauss, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:05:05 compute-0 podman[74479]: 2025-11-25 20:05:05.663328515 +0000 UTC m=+0.137859772 container attach eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0 (image=quay.io/ceph/ceph:v18, name=peaceful_gauss, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 20:05:05 compute-0 systemd[1]: libpod-eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0.scope: Deactivated successfully.
Nov 25 20:05:05 compute-0 conmon[74495]: conmon eba2c7afa0220c98a9dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0.scope/container/memory.events
Nov 25 20:05:05 compute-0 podman[74479]: 2025-11-25 20:05:05.778249731 +0000 UTC m=+0.252780988 container died eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0 (image=quay.io/ceph/ceph:v18, name=peaceful_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:05:05 compute-0 podman[74479]: 2025-11-25 20:05:05.819415961 +0000 UTC m=+0.293947218 container remove eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0 (image=quay.io/ceph/ceph:v18, name=peaceful_gauss, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:05:05 compute-0 systemd[1]: libpod-conmon-eba2c7afa0220c98a9dd0424387437da9558911f7ab3500d668326d75cb4c9f0.scope: Deactivated successfully.
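peaceful_gauss bind-mounts /tmp/keyring, /tmp/monmap and the new mon data directory and exits within a quarter second, which is consistent with this being the monitor-store initialization step: the MANIFEST-000005 and 000004.log files that ceph-mon recovers below must have been written here. A sketch of the canonical `ceph-mon --mkfs` command under that assumption; the paths match the mounts visible in the kernel messages above:

```python
import subprocess

# ceph-mon --mkfs seeds /var/lib/ceph/mon/ceph-<id> from a monmap and a
# bootstrap keyring. Whether cephadm adds further flags is an assumption.
subprocess.run(
    [
        "ceph-mon", "--mkfs",
        "-i", "compute-0",
        "--monmap", "/tmp/monmap",
        "--keyring", "/tmp/keyring",
    ],
    check=True,
)
```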
Nov 25 20:05:05 compute-0 systemd[1]: Reloading.
Nov 25 20:05:06 compute-0 systemd-rc-local-generator[74562]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:06 compute-0 systemd-sysv-generator[74566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:06 compute-0 systemd[1]: Reloading.
Nov 25 20:05:06 compute-0 systemd-sysv-generator[74601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:06 compute-0 systemd-rc-local-generator[74596]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:06 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 25 20:05:06 compute-0 systemd[1]: Reloading.
Nov 25 20:05:06 compute-0 systemd-rc-local-generator[74632]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:06 compute-0 systemd-sysv-generator[74637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:06 compute-0 systemd[1]: Reached target Ceph cluster 712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:05:06 compute-0 systemd[1]: Reloading.
Nov 25 20:05:06 compute-0 systemd-sysv-generator[74677]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:06 compute-0 systemd-rc-local-generator[74673]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:07 compute-0 systemd[1]: Reloading.
Nov 25 20:05:07 compute-0 systemd-rc-local-generator[74718]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:07 compute-0 systemd-sysv-generator[74721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:07 compute-0 systemd[1]: Created slice Slice /system/ceph-712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:05:07 compute-0 systemd[1]: Reached target System Time Set.
Nov 25 20:05:07 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 25 20:05:07 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:05:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:07 compute-0 podman[74772]: 2025-11-25 20:05:07.629304299 +0000 UTC m=+0.074228180 container create b46473b2328cf7602f3b4de685ec985d20ebcb6c26683ca0bdffa915b01cb94c (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:05:07 compute-0 podman[74772]: 2025-11-25 20:05:07.598626394 +0000 UTC m=+0.043550335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46aeced212b80c497cd6137af0c848a2064e182b2331f6ae0227e08f50c87539/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46aeced212b80c497cd6137af0c848a2064e182b2331f6ae0227e08f50c87539/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46aeced212b80c497cd6137af0c848a2064e182b2331f6ae0227e08f50c87539/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46aeced212b80c497cd6137af0c848a2064e182b2331f6ae0227e08f50c87539/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:07 compute-0 podman[74772]: 2025-11-25 20:05:07.733116123 +0000 UTC m=+0.178040054 container init b46473b2328cf7602f3b4de685ec985d20ebcb6c26683ca0bdffa915b01cb94c (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:07 compute-0 podman[74772]: 2025-11-25 20:05:07.745591373 +0000 UTC m=+0.190515244 container start b46473b2328cf7602f3b4de685ec985d20ebcb6c26683ca0bdffa915b01cb94c (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:05:07 compute-0 bash[74772]: b46473b2328cf7602f3b4de685ec985d20ebcb6c26683ca0bdffa915b01cb94c
Nov 25 20:05:07 compute-0 systemd[1]: Started Ceph mon.compute-0 for 712dd110-763a-5547-8ef7-acda1414fdce.
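From here on the monitor is a regular systemd service: a slice under /system/ceph-712dd110-… was created, the unit's start command launched the named podman container (bash[74772] echoes its ID), and systemd marks the unit started. Assuming cephadm's usual `ceph-<fsid>@<daemon>.service` template naming — which the slice name and unit description above suggest, but which is inferred rather than shown — the unit can be inspected like this:

```python
import subprocess

FSID = "712dd110-763a-5547-8ef7-acda1414fdce"
UNIT = f"ceph-{FSID}@mon.compute-0.service"   # inferred naming, see lead-in

# Show unit state, then the monitor's most recent journal lines.
subprocess.run(["systemctl", "status", "--no-pager", UNIT])
subprocess.run(["journalctl", "-u", UNIT, "-n", "20", "--no-pager"])
```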
Nov 25 20:05:07 compute-0 ceph-mon[74792]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 20:05:07 compute-0 ceph-mon[74792]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 20:05:07 compute-0 ceph-mon[74792]: pidfile_write: ignore empty --pid-file
Nov 25 20:05:07 compute-0 ceph-mon[74792]: load: jerasure load: lrc 
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: RocksDB version: 7.9.2
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Git sha 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: DB SUMMARY
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: DB Session ID:  JP9PRBVM01QXLZZLMMDH
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: CURRENT file:  CURRENT
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                         Options.error_if_exists: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                       Options.create_if_missing: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                                     Options.env: 0x55851ecb2c40
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                                Options.info_log: 0x558520feee80
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                              Options.statistics: (nil)
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                               Options.use_fsync: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                              Options.db_log_dir: 
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                                 Options.wal_dir: 
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                    Options.write_buffer_manager: 0x558520ffeb40
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.unordered_write: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                               Options.row_cache: None
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                              Options.wal_filter: None
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.two_write_queues: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.wal_compression: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.atomic_flush: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.max_background_jobs: 2
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.max_background_compactions: -1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.max_subcompactions: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                          Options.max_open_files: -1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Compression algorithms supported:
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         kZSTD supported: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         kXpressCompression supported: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         kBZip2Compression supported: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         kLZ4Compression supported: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         kZlibCompression supported: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         kSnappyCompression supported: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:           Options.merge_operator: 
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:        Options.compaction_filter: None
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558520feea80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558520fe71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:          Options.compression: NoCompression
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.num_levels: 7
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e268f949-cc37-4e61-bd9c-5215f99d2d7b
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101107812317, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101107814107, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "JP9PRBVM01QXLZZLMMDH", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101107814251, "job": 1, "event": "recovery_finished"}
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558521010e00
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: DB pointer 0x55852109a000
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:05:07 compute-0 ceph-mon[74792]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558520fe71f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 20:05:07 compute-0 ceph-mon[74792]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@-1(???) e0 preinit fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 25 20:05:07 compute-0 ceph-mon[74792]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 20:05:07 compute-0 ceph-mon[74792]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 20:05:07 compute-0 ceph-mon[74792]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 20:05:07 compute-0 ceph-mon[74792]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 20:05:07 compute-0 ceph-mon[74792]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-25T20:05:05.721282Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,os=Linux}
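
The update_daemon_metadata line above packs the mon's metadata into one flattened key=value blob. On a live cluster the same data is available as JSON via the standard `ceph mon metadata` command; a minimal sketch, assuming an admin keyring and the ceph CLI on PATH (neither is shown in this log):

    import json
    import subprocess

    # Query the metadata for mon.compute-0 as JSON instead of scraping the journal.
    out = subprocess.run(
        ["ceph", "mon", "metadata", "compute-0", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    meta = json.loads(out)
    print(meta["ceph_version_short"], meta["kernel_version"])   # 18.2.7 5.14.0-642.el9.x86_64
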
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).mds e1 new map
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 20:05:07 compute-0 ceph-mon[74792]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mkfs 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 25 20:05:07 compute-0 podman[74793]: 2025-11-25 20:05:07.867497659 +0000 UTC m=+0.065080552 container create 20060d60d63309d0ac2449e9d078bbc761f2e5a35b6c8112ff6abaffc7587116 (image=quay.io/ceph/ceph:v18, name=upbeat_poitras, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 20:05:07 compute-0 ceph-mon[74792]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 20:05:07 compute-0 ceph-mon[74792]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 20:05:07 compute-0 ceph-mon[74792]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 20:05:07 compute-0 systemd[1]: Started libpod-conmon-20060d60d63309d0ac2449e9d078bbc761f2e5a35b6c8112ff6abaffc7587116.scope.
Nov 25 20:05:07 compute-0 podman[74793]: 2025-11-25 20:05:07.846983131 +0000 UTC m=+0.044566064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8614c06ac02002d7e7da76a9db401372a1d620b6c74d5e95bf4f49ab3cccae4c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8614c06ac02002d7e7da76a9db401372a1d620b6c74d5e95bf4f49ab3cccae4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8614c06ac02002d7e7da76a9db401372a1d620b6c74d5e95bf4f49ab3cccae4c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:07 compute-0 podman[74793]: 2025-11-25 20:05:07.999082659 +0000 UTC m=+0.196665582 container init 20060d60d63309d0ac2449e9d078bbc761f2e5a35b6c8112ff6abaffc7587116 (image=quay.io/ceph/ceph:v18, name=upbeat_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:05:08 compute-0 podman[74793]: 2025-11-25 20:05:08.015729072 +0000 UTC m=+0.213311995 container start 20060d60d63309d0ac2449e9d078bbc761f2e5a35b6c8112ff6abaffc7587116 (image=quay.io/ceph/ceph:v18, name=upbeat_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:05:08 compute-0 podman[74793]: 2025-11-25 20:05:08.020044329 +0000 UTC m=+0.217627412 container attach 20060d60d63309d0ac2449e9d078bbc761f2e5a35b6c8112ff6abaffc7587116 (image=quay.io/ceph/ceph:v18, name=upbeat_poitras, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:08 compute-0 ceph-mon[74792]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 20:05:08 compute-0 ceph-mon[74792]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373975433' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:   cluster:
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:     id:     712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:     health: HEALTH_OK
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:  
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:   services:
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:     mon: 1 daemons, quorum compute-0 (age 0.557792s)
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:     mgr: no daemons active
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:     osd: 0 osds: 0 up, 0 in
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:  
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:   data:
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:     pools:   0 pools, 0 pgs
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:     objects: 0 objects, 0 B
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:     usage:   0 B used, 0 B / 0 B avail
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:     pgs:     
Nov 25 20:05:08 compute-0 upbeat_poitras[74849]:  
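
The container named upbeat_poitras is a one-shot `ceph status` run; the indented block above is its stdout. The same command accepts --format json for machine consumption; a minimal sketch under the same assumption that the ceph CLI and keyring are reachable (the field names are the usual status JSON keys):

    import json
    import subprocess

    # JSON variant of the status call whose plain-text output appears above.
    status = json.loads(
        subprocess.run(
            ["ceph", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
    )
    print(status["health"]["status"])   # HEALTH_OK, matching the log
    print(status["quorum_names"])       # ["compute-0"]
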
Nov 25 20:05:08 compute-0 systemd[1]: libpod-20060d60d63309d0ac2449e9d078bbc761f2e5a35b6c8112ff6abaffc7587116.scope: Deactivated successfully.
Nov 25 20:05:08 compute-0 podman[74793]: 2025-11-25 20:05:08.423153156 +0000 UTC m=+0.620736039 container died 20060d60d63309d0ac2449e9d078bbc761f2e5a35b6c8112ff6abaffc7587116 (image=quay.io/ceph/ceph:v18, name=upbeat_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:08 compute-0 podman[74793]: 2025-11-25 20:05:08.486033367 +0000 UTC m=+0.683616290 container remove 20060d60d63309d0ac2449e9d078bbc761f2e5a35b6c8112ff6abaffc7587116 (image=quay.io/ceph/ceph:v18, name=upbeat_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:05:08 compute-0 systemd[1]: libpod-conmon-20060d60d63309d0ac2449e9d078bbc761f2e5a35b6c8112ff6abaffc7587116.scope: Deactivated successfully.
Nov 25 20:05:08 compute-0 podman[74885]: 2025-11-25 20:05:08.587918599 +0000 UTC m=+0.071037964 container create f0d4225a270b162a31f23a6c537349a6528f9915fc7c8049fd899cc1db1f245b (image=quay.io/ceph/ceph:v18, name=confident_villani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:05:08 compute-0 systemd[1]: Started libpod-conmon-f0d4225a270b162a31f23a6c537349a6528f9915fc7c8049fd899cc1db1f245b.scope.
Nov 25 20:05:08 compute-0 podman[74885]: 2025-11-25 20:05:08.556406311 +0000 UTC m=+0.039525726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebbeebd7d5d6fb3427a64ff3eca828b6587d52e2e9e7d7254e8465660d562b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebbeebd7d5d6fb3427a64ff3eca828b6587d52e2e9e7d7254e8465660d562b4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebbeebd7d5d6fb3427a64ff3eca828b6587d52e2e9e7d7254e8465660d562b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebbeebd7d5d6fb3427a64ff3eca828b6587d52e2e9e7d7254e8465660d562b4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:08 compute-0 podman[74885]: 2025-11-25 20:05:08.710054422 +0000 UTC m=+0.193173877 container init f0d4225a270b162a31f23a6c537349a6528f9915fc7c8049fd899cc1db1f245b (image=quay.io/ceph/ceph:v18, name=confident_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:05:08 compute-0 podman[74885]: 2025-11-25 20:05:08.720103896 +0000 UTC m=+0.203223271 container start f0d4225a270b162a31f23a6c537349a6528f9915fc7c8049fd899cc1db1f245b (image=quay.io/ceph/ceph:v18, name=confident_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:05:08 compute-0 podman[74885]: 2025-11-25 20:05:08.724355911 +0000 UTC m=+0.207475276 container attach f0d4225a270b162a31f23a6c537349a6528f9915fc7c8049fd899cc1db1f245b (image=quay.io/ceph/ceph:v18, name=confident_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:08 compute-0 ceph-mon[74792]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 20:05:08 compute-0 ceph-mon[74792]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 20:05:08 compute-0 ceph-mon[74792]: fsmap 
Nov 25 20:05:08 compute-0 ceph-mon[74792]: osdmap e1: 0 total, 0 up, 0 in
Nov 25 20:05:08 compute-0 ceph-mon[74792]: mgrmap e1: no daemons active
Nov 25 20:05:08 compute-0 ceph-mon[74792]: from='client.? 192.168.122.100:0/373975433' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 20:05:09 compute-0 ceph-mon[74792]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 20:05:09 compute-0 ceph-mon[74792]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4021919874' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 20:05:09 compute-0 ceph-mon[74792]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4021919874' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 20:05:09 compute-0 confident_villani[74902]: 
Nov 25 20:05:09 compute-0 confident_villani[74902]: [global]
Nov 25 20:05:09 compute-0 confident_villani[74902]:         fsid = 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:05:09 compute-0 confident_villani[74902]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 25 20:05:09 compute-0 confident_villani[74902]:         osd_crush_chooseleaf_type = 0
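
The confident_villani output above is the ceph.conf returned by the `config assimilate-conf` call logged just before it: a minimal [global] stanza carrying fsid, mon_host, and osd_crush_chooseleaf_type. It is plain INI, so standard tooling reads it once the journald indentation is stripped; a short self-contained check:

    import configparser

    # Re-parse the minimal config exactly as printed above (indentation dropped).
    cfg = configparser.ConfigParser()
    cfg.read_string("""
    [global]
    fsid = 712dd110-763a-5547-8ef7-acda1414fdce
    mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
    osd_crush_chooseleaf_type = 0
    """)
    print(cfg["global"]["fsid"])
    print(cfg["global"]["mon_host"])
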
Nov 25 20:05:09 compute-0 systemd[1]: libpod-f0d4225a270b162a31f23a6c537349a6528f9915fc7c8049fd899cc1db1f245b.scope: Deactivated successfully.
Nov 25 20:05:09 compute-0 podman[74928]: 2025-11-25 20:05:09.208321067 +0000 UTC m=+0.032575537 container died f0d4225a270b162a31f23a6c537349a6528f9915fc7c8049fd899cc1db1f245b (image=quay.io/ceph/ceph:v18, name=confident_villani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-eebbeebd7d5d6fb3427a64ff3eca828b6587d52e2e9e7d7254e8465660d562b4-merged.mount: Deactivated successfully.
Nov 25 20:05:09 compute-0 podman[74928]: 2025-11-25 20:05:09.307542496 +0000 UTC m=+0.131796936 container remove f0d4225a270b162a31f23a6c537349a6528f9915fc7c8049fd899cc1db1f245b (image=quay.io/ceph/ceph:v18, name=confident_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:05:09 compute-0 systemd[1]: libpod-conmon-f0d4225a270b162a31f23a6c537349a6528f9915fc7c8049fd899cc1db1f245b.scope: Deactivated successfully.
Nov 25 20:05:09 compute-0 podman[74942]: 2025-11-25 20:05:09.392581081 +0000 UTC m=+0.057212248 container create 8d397a7a41b51eee0c5ab819f48ad77705fc999b09153f9543c1e29e02654292 (image=quay.io/ceph/ceph:v18, name=heuristic_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:05:09 compute-0 systemd[1]: Started libpod-conmon-8d397a7a41b51eee0c5ab819f48ad77705fc999b09153f9543c1e29e02654292.scope.
Nov 25 20:05:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:09 compute-0 podman[74942]: 2025-11-25 20:05:09.364045704 +0000 UTC m=+0.028676961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807e1d235570b082de013a049dca71709e29e0c36a0de336cb9358b0ef4824b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807e1d235570b082de013a049dca71709e29e0c36a0de336cb9358b0ef4824b0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807e1d235570b082de013a049dca71709e29e0c36a0de336cb9358b0ef4824b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807e1d235570b082de013a049dca71709e29e0c36a0de336cb9358b0ef4824b0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:09 compute-0 podman[74942]: 2025-11-25 20:05:09.478691472 +0000 UTC m=+0.143322659 container init 8d397a7a41b51eee0c5ab819f48ad77705fc999b09153f9543c1e29e02654292 (image=quay.io/ceph/ceph:v18, name=heuristic_burnell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:09 compute-0 podman[74942]: 2025-11-25 20:05:09.496674082 +0000 UTC m=+0.161305229 container start 8d397a7a41b51eee0c5ab819f48ad77705fc999b09153f9543c1e29e02654292 (image=quay.io/ceph/ceph:v18, name=heuristic_burnell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:09 compute-0 podman[74942]: 2025-11-25 20:05:09.500880736 +0000 UTC m=+0.165511923 container attach 8d397a7a41b51eee0c5ab819f48ad77705fc999b09153f9543c1e29e02654292 (image=quay.io/ceph/ceph:v18, name=heuristic_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:05:09 compute-0 ceph-mon[74792]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:05:09 compute-0 ceph-mon[74792]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167210924' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:05:09 compute-0 ceph-mon[74792]: from='client.? 192.168.122.100:0/4021919874' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 20:05:09 compute-0 ceph-mon[74792]: from='client.? 192.168.122.100:0/4021919874' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 20:05:09 compute-0 systemd[1]: libpod-8d397a7a41b51eee0c5ab819f48ad77705fc999b09153f9543c1e29e02654292.scope: Deactivated successfully.
Nov 25 20:05:09 compute-0 podman[74942]: 2025-11-25 20:05:09.902134053 +0000 UTC m=+0.566765190 container died 8d397a7a41b51eee0c5ab819f48ad77705fc999b09153f9543c1e29e02654292 (image=quay.io/ceph/ceph:v18, name=heuristic_burnell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-807e1d235570b082de013a049dca71709e29e0c36a0de336cb9358b0ef4824b0-merged.mount: Deactivated successfully.
Nov 25 20:05:09 compute-0 podman[74942]: 2025-11-25 20:05:09.992217214 +0000 UTC m=+0.656848351 container remove 8d397a7a41b51eee0c5ab819f48ad77705fc999b09153f9543c1e29e02654292 (image=quay.io/ceph/ceph:v18, name=heuristic_burnell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:05:10 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:05:10 compute-0 systemd[1]: libpod-conmon-8d397a7a41b51eee0c5ab819f48ad77705fc999b09153f9543c1e29e02654292.scope: Deactivated successfully.
Nov 25 20:05:10 compute-0 ceph-mon[74792]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 20:05:10 compute-0 ceph-mon[74792]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 20:05:10 compute-0 ceph-mon[74792]: mon.compute-0@0(leader) e1 shutdown
Nov 25 20:05:10 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0[74788]: 2025-11-25T20:05:10.269+0000 7f43b9c77640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 20:05:10 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0[74788]: 2025-11-25T20:05:10.269+0000 7f43b9c77640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 20:05:10 compute-0 ceph-mon[74792]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 20:05:10 compute-0 ceph-mon[74792]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 20:05:10 compute-0 podman[75026]: 2025-11-25 20:05:10.403718039 +0000 UTC m=+0.207951240 container died b46473b2328cf7602f3b4de685ec985d20ebcb6c26683ca0bdffa915b01cb94c (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-46aeced212b80c497cd6137af0c848a2064e182b2331f6ae0227e08f50c87539-merged.mount: Deactivated successfully.
Nov 25 20:05:10 compute-0 podman[75026]: 2025-11-25 20:05:10.441124066 +0000 UTC m=+0.245357267 container remove b46473b2328cf7602f3b4de685ec985d20ebcb6c26683ca0bdffa915b01cb94c (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:10 compute-0 bash[75026]: ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0
Nov 25 20:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:05:10 compute-0 systemd[1]: ceph-712dd110-763a-5547-8ef7-acda1414fdce@mon.compute-0.service: Deactivated successfully.
Nov 25 20:05:10 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:05:10 compute-0 systemd[1]: ceph-712dd110-763a-5547-8ef7-acda1414fdce@mon.compute-0.service: Consumed 1.239s CPU time.
Nov 25 20:05:10 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:05:10 compute-0 podman[75125]: 2025-11-25 20:05:10.9099368 +0000 UTC m=+0.054929925 container create 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 20:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f3ae3c7dbcbae45bcd59af2052b4b8abb3181df48abe5e2e3fc0b2ab5c8eae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f3ae3c7dbcbae45bcd59af2052b4b8abb3181df48abe5e2e3fc0b2ab5c8eae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f3ae3c7dbcbae45bcd59af2052b4b8abb3181df48abe5e2e3fc0b2ab5c8eae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f3ae3c7dbcbae45bcd59af2052b4b8abb3181df48abe5e2e3fc0b2ab5c8eae/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:10 compute-0 podman[75125]: 2025-11-25 20:05:10.879737089 +0000 UTC m=+0.024730254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:10 compute-0 podman[75125]: 2025-11-25 20:05:10.986155774 +0000 UTC m=+0.131148889 container init 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:05:10 compute-0 podman[75125]: 2025-11-25 20:05:10.996391962 +0000 UTC m=+0.141385057 container start 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:10 compute-0 bash[75125]: 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a
Nov 25 20:05:11 compute-0 systemd[1]: Started Ceph mon.compute-0 for 712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:05:11 compute-0 ceph-mon[75144]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 20:05:11 compute-0 ceph-mon[75144]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 20:05:11 compute-0 ceph-mon[75144]: pidfile_write: ignore empty --pid-file
Nov 25 20:05:11 compute-0 ceph-mon[75144]: load: jerasure load: lrc 
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: RocksDB version: 7.9.2
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Git sha 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: DB SUMMARY
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: DB Session ID:  BBUKM01M1VKNQ9NGVXH7
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: CURRENT file:  CURRENT
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54560 ; 
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                         Options.error_if_exists: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                       Options.create_if_missing: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                                     Options.env: 0x5585aaaebc40
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                                Options.info_log: 0x5585aba0b040
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                              Options.statistics: (nil)
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                               Options.use_fsync: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                              Options.db_log_dir: 
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                                 Options.wal_dir: 
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                    Options.write_buffer_manager: 0x5585aba1ab40
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.unordered_write: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                               Options.row_cache: None
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                              Options.wal_filter: None
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.two_write_queues: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.wal_compression: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.atomic_flush: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.max_background_jobs: 2
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.max_background_compactions: -1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.max_subcompactions: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                          Options.max_open_files: -1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Compression algorithms supported:
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         kZSTD supported: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         kXpressCompression supported: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         kBZip2Compression supported: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         kLZ4Compression supported: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         kZlibCompression supported: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         kSnappyCompression supported: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:           Options.merge_operator: 
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:        Options.compaction_filter: None
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5585aba0ac40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5585aba031f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:          Options.compression: NoCompression
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.num_levels: 7
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
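The dump above is ceph-mon printing the effective RocksDB column-family options for its key/value store (the store.db path appears in the recovery line just below). To diff these settings against another monitor's, the key/value pairs can be scraped straight from the journal; a minimal sketch in Python, where the journalctl unit name and the regex are assumptions rather than anything taken from this log:

    import json
    import re
    import subprocess

    # Read this host's ceph-mon messages from the journal. The unit name is
    # hypothetical; adjust to whatever `systemctl list-units | grep mon` shows.
    out = subprocess.run(
        ["journalctl", "-u",
         "ceph-712dd110-763a-5547-8ef7-acda1414fdce@mon.compute-0",
         "--no-pager", "-o", "cat"],
        capture_output=True, text=True, check=True).stdout

    opts = {}
    for line in out.splitlines():
        # Lines look like: "rocksdb:             Options.num_levels: 7"
        m = re.search(r"Options\.([\w.\[\]]+):\s+(.+)$", line)
        if m:
            opts[m.group(1)] = m.group(2).strip()

    print(json.dumps(opts, indent=2, sort_keys=True))

Dumping both monitors' dictionaries and comparing them makes drift in settings such as compaction triggers or compression options easy to spot.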
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e268f949-cc37-4e61-bd9c-5215f99d2d7b
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101111066856, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101111070882, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52691, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50293, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101111, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101111071118, "job": 1, "event": "recovery_finished"}
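The EVENT_LOG_v1 lines above carry a machine-readable JSON payload after the marker; the table_file_creation event, for example, records the SST flushed during WAL recovery (file_number 13, 54149 bytes, 100 entries, 3 deletions). A small sketch for pulling those payloads out of captured log text; only the string splitting is assumed, and the sample line is taken verbatim from the recovery above:

    import json

    def parse_event_log(lines):
        """Yield the JSON payload of each RocksDB EVENT_LOG_v1 line."""
        marker = "EVENT_LOG_v1 "
        for line in lines:
            idx = line.find(marker)
            if idx != -1:
                yield json.loads(line[idx + len(marker):])

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764101111071118, '
              '"job": 1, "event": "recovery_finished"}')
    for ev in parse_event_log([sample]):
        print(ev["event"], ev["time_micros"])  # recovery_finished 1764101111071118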
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5585aba2ce00
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: DB pointer 0x5585abab6000
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:05:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.61 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.61 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585aba031f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
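The DUMPING STATS block is RocksDB's periodic statistics report; with the DB opened only milliseconds earlier, nearly every counter is still zero and only the recovery flush (L0: 2 files, 54.78 KB) registers. One oddity worth noting: the block-cache line reports occupancy: 18446744073709551615, which is exactly 2**64 - 1, so this looks like a wrapped unsigned sentinel rather than a real entry count (usage is only 0.78 KB). A tiny sketch for normalising such counters when post-processing these dumps; reading uint64-max as "not tracked" is an assumption, not documented behaviour:

    UINT64_MAX = 2**64 - 1  # 18446744073709551615, as in the occupancy field above

    def normalize(counter: int):
        """Treat uint64-max counters as 'not tracked' (a wrapped -1)."""
        return None if counter == UINT64_MAX else counter

    print(normalize(18446744073709551615))  # None
    print(normalize(2))                     # 2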
Nov 25 20:05:11 compute-0 podman[75145]: 2025-11-25 20:05:11.084624573 +0000 UTC m=+0.052664944 container create 66a78dd321ed89d581190b3c6e6acc2242f7c30026fed40d2d5b9789fd8850d0 (image=quay.io/ceph/ceph:v18, name=zen_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:11 compute-0 ceph-mon[75144]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@-1(???) e1 preinit fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@-1(???).mds e1 new map
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 20:05:11 compute-0 ceph-mon[75144]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 20:05:11 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 20:05:11 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 20:05:11 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 20:05:11 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 20:05:11 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 20:05:11 compute-0 systemd[1]: Started libpod-conmon-66a78dd321ed89d581190b3c6e6acc2242f7c30026fed40d2d5b9789fd8850d0.scope.
Nov 25 20:05:11 compute-0 podman[75145]: 2025-11-25 20:05:11.061571666 +0000 UTC m=+0.029612047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 20:05:11 compute-0 ceph-mon[75144]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 20:05:11 compute-0 ceph-mon[75144]: fsmap 
Nov 25 20:05:11 compute-0 ceph-mon[75144]: osdmap e1: 0 total, 0 up, 0 in
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mgrmap e1: no daemons active
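At this point mon.compute-0 has formed a single-member quorum (win_standalone_election, rank 0) and published empty osd/fs/mgr maps. Quorum can be confirmed programmatically with the python3-rados binding; a sketch that assumes the same /etc/ceph/ceph.conf and admin keyring that the containers in this log bind-mount:

    import json
    import rados  # python3-rados package, ships with Ceph

    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "quorum_status", "format": "json"}), b"")
    status = json.loads(out)
    print(status["quorum_names"])        # expect ["compute-0"]
    print(status["quorum_leader_name"])  # expect "compute-0"
    cluster.shutdown()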
Nov 25 20:05:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3c9dddff78c1043514887d2ce77a965f5f81523202ad65744d1b9cbe1c9168/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3c9dddff78c1043514887d2ce77a965f5f81523202ad65744d1b9cbe1c9168/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3c9dddff78c1043514887d2ce77a965f5f81523202ad65744d1b9cbe1c9168/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:11 compute-0 podman[75145]: 2025-11-25 20:05:11.212054859 +0000 UTC m=+0.180095290 container init 66a78dd321ed89d581190b3c6e6acc2242f7c30026fed40d2d5b9789fd8850d0 (image=quay.io/ceph/ceph:v18, name=zen_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 20:05:11 compute-0 podman[75145]: 2025-11-25 20:05:11.222980176 +0000 UTC m=+0.191020527 container start 66a78dd321ed89d581190b3c6e6acc2242f7c30026fed40d2d5b9789fd8850d0 (image=quay.io/ceph/ceph:v18, name=zen_shockley, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:11 compute-0 podman[75145]: 2025-11-25 20:05:11.22715998 +0000 UTC m=+0.195200432 container attach 66a78dd321ed89d581190b3c6e6acc2242f7c30026fed40d2d5b9789fd8850d0 (image=quay.io/ceph/ceph:v18, name=zen_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:05:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 25 20:05:11 compute-0 systemd[1]: libpod-66a78dd321ed89d581190b3c6e6acc2242f7c30026fed40d2d5b9789fd8850d0.scope: Deactivated successfully.
Nov 25 20:05:11 compute-0 podman[75226]: 2025-11-25 20:05:11.717455309 +0000 UTC m=+0.040160094 container died 66a78dd321ed89d581190b3c6e6acc2242f7c30026fed40d2d5b9789fd8850d0 (image=quay.io/ceph/ceph:v18, name=zen_shockley, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a3c9dddff78c1043514887d2ce77a965f5f81523202ad65744d1b9cbe1c9168-merged.mount: Deactivated successfully.
Nov 25 20:05:11 compute-0 podman[75226]: 2025-11-25 20:05:11.766073581 +0000 UTC m=+0.088778336 container remove 66a78dd321ed89d581190b3c6e6acc2242f7c30026fed40d2d5b9789fd8850d0 (image=quay.io/ceph/ceph:v18, name=zen_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:05:11 compute-0 systemd[1]: libpod-conmon-66a78dd321ed89d581190b3c6e6acc2242f7c30026fed40d2d5b9789fd8850d0.scope: Deactivated successfully.
Nov 25 20:05:11 compute-0 podman[75241]: 2025-11-25 20:05:11.867717486 +0000 UTC m=+0.062036698 container create 5ceb8ece5f087d6a9f60d4289aeac49b8b9f23a21afe321cc041ffc5cd0d6737 (image=quay.io/ceph/ceph:v18, name=elegant_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:05:11 compute-0 systemd[1]: Started libpod-conmon-5ceb8ece5f087d6a9f60d4289aeac49b8b9f23a21afe321cc041ffc5cd0d6737.scope.
Nov 25 20:05:11 compute-0 podman[75241]: 2025-11-25 20:05:11.83993978 +0000 UTC m=+0.034259032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17647a5e852aa00ee63eb3198e46914e3707372be3bebfcc8da59ce8a3cfac0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17647a5e852aa00ee63eb3198e46914e3707372be3bebfcc8da59ce8a3cfac0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17647a5e852aa00ee63eb3198e46914e3707372be3bebfcc8da59ce8a3cfac0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:11 compute-0 podman[75241]: 2025-11-25 20:05:11.970473202 +0000 UTC m=+0.164792404 container init 5ceb8ece5f087d6a9f60d4289aeac49b8b9f23a21afe321cc041ffc5cd0d6737 (image=quay.io/ceph/ceph:v18, name=elegant_kowalevski, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:05:11 compute-0 podman[75241]: 2025-11-25 20:05:11.977285777 +0000 UTC m=+0.171604989 container start 5ceb8ece5f087d6a9f60d4289aeac49b8b9f23a21afe321cc041ffc5cd0d6737 (image=quay.io/ceph/ceph:v18, name=elegant_kowalevski, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:05:11 compute-0 podman[75241]: 2025-11-25 20:05:11.981704538 +0000 UTC m=+0.176023730 container attach 5ceb8ece5f087d6a9f60d4289aeac49b8b9f23a21afe321cc041ffc5cd0d6737 (image=quay.io/ceph/ceph:v18, name=elegant_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 25 20:05:12 compute-0 systemd[1]: libpod-5ceb8ece5f087d6a9f60d4289aeac49b8b9f23a21afe321cc041ffc5cd0d6737.scope: Deactivated successfully.
Nov 25 20:05:12 compute-0 podman[75241]: 2025-11-25 20:05:12.401861398 +0000 UTC m=+0.596180610 container died 5ceb8ece5f087d6a9f60d4289aeac49b8b9f23a21afe321cc041ffc5cd0d6737 (image=quay.io/ceph/ceph:v18, name=elegant_kowalevski, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17647a5e852aa00ee63eb3198e46914e3707372be3bebfcc8da59ce8a3cfac0-merged.mount: Deactivated successfully.
Nov 25 20:05:12 compute-0 podman[75241]: 2025-11-25 20:05:12.457301856 +0000 UTC m=+0.651621028 container remove 5ceb8ece5f087d6a9f60d4289aeac49b8b9f23a21afe321cc041ffc5cd0d6737 (image=quay.io/ceph/ceph:v18, name=elegant_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:12 compute-0 systemd[1]: libpod-conmon-5ceb8ece5f087d6a9f60d4289aeac49b8b9f23a21afe321cc041ffc5cd0d6737.scope: Deactivated successfully.
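The two short-lived ceph:v18 containers above (zen_shockley, elegant_kowalevski) are cephadm bootstrap issuing one-shot `ceph config set` calls; the mon's handle_command entries show only the option names (public_network, cluster_network), not the values. The equivalent invocation can be sketched with podman as below; the subnet values and the `global` target section are assumptions inferred from the 192.168.122.100 addresses in this log, and the bind mounts mirror the paths visible in the xfs remount messages:

    import subprocess

    # Hypothetical values: only the option names appear in the log above.
    for name, value in [("public_network", "192.168.122.0/24"),
                        ("cluster_network", "192.168.122.0/24")]:
        subprocess.run(
            ["podman", "run", "--rm", "--net=host",
             "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro",
             "-v", ("/etc/ceph/ceph.client.admin.keyring:"
                    "/etc/ceph/ceph.client.admin.keyring:ro"),
             "quay.io/ceph/ceph:v18",
             "ceph", "config", "set", "global", name, value],
            check=True)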
Nov 25 20:05:12 compute-0 systemd[1]: Reloading.
Nov 25 20:05:12 compute-0 systemd-rc-local-generator[75324]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:12 compute-0 systemd-sysv-generator[75327]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:12 compute-0 systemd[1]: Reloading.
Nov 25 20:05:12 compute-0 systemd-rc-local-generator[75365]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:12 compute-0 systemd-sysv-generator[75368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:13 compute-0 systemd[1]: Starting Ceph mgr.compute-0.hdjasd for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:05:13 compute-0 podman[75424]: 2025-11-25 20:05:13.351791671 +0000 UTC m=+0.061887325 container create b3ee4d5e017818d99b3874b792109dfeb8da098590c166bf6b3f4e15f218486a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5259e62b004a9f396503830bb52a941cb792b47a882c69e763afb254007cc3a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5259e62b004a9f396503830bb52a941cb792b47a882c69e763afb254007cc3a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5259e62b004a9f396503830bb52a941cb792b47a882c69e763afb254007cc3a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5259e62b004a9f396503830bb52a941cb792b47a882c69e763afb254007cc3a8/merged/var/lib/ceph/mgr/ceph-compute-0.hdjasd supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:13 compute-0 podman[75424]: 2025-11-25 20:05:13.327963703 +0000 UTC m=+0.038059377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:13 compute-0 podman[75424]: 2025-11-25 20:05:13.429473694 +0000 UTC m=+0.139569398 container init b3ee4d5e017818d99b3874b792109dfeb8da098590c166bf6b3f4e15f218486a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:05:13 compute-0 podman[75424]: 2025-11-25 20:05:13.437743819 +0000 UTC m=+0.147839483 container start b3ee4d5e017818d99b3874b792109dfeb8da098590c166bf6b3f4e15f218486a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:13 compute-0 bash[75424]: b3ee4d5e017818d99b3874b792109dfeb8da098590c166bf6b3f4e15f218486a
Nov 25 20:05:13 compute-0 systemd[1]: Started Ceph mgr.compute-0.hdjasd for 712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:05:13 compute-0 ceph-mgr[75443]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 20:05:13 compute-0 ceph-mgr[75443]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 20:05:13 compute-0 ceph-mgr[75443]: pidfile_write: ignore empty --pid-file
Nov 25 20:05:13 compute-0 podman[75444]: 2025-11-25 20:05:13.536096435 +0000 UTC m=+0.049737244 container create f97d9d285cacb8884b2bbd3ef06d87220b4798a81772f5cb3355feecf03259d8 (image=quay.io/ceph/ceph:v18, name=stoic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 20:05:13 compute-0 systemd[1]: Started libpod-conmon-f97d9d285cacb8884b2bbd3ef06d87220b4798a81772f5cb3355feecf03259d8.scope.
Nov 25 20:05:13 compute-0 podman[75444]: 2025-11-25 20:05:13.515408691 +0000 UTC m=+0.029049500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1772d5a3c13a5dccb60f6ca6be27b7ef44c73cdd47ead0910df6637b0655cf55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1772d5a3c13a5dccb60f6ca6be27b7ef44c73cdd47ead0910df6637b0655cf55/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1772d5a3c13a5dccb60f6ca6be27b7ef44c73cdd47ead0910df6637b0655cf55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:13 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'alerts'
Nov 25 20:05:13 compute-0 podman[75444]: 2025-11-25 20:05:13.639422946 +0000 UTC m=+0.153063765 container init f97d9d285cacb8884b2bbd3ef06d87220b4798a81772f5cb3355feecf03259d8 (image=quay.io/ceph/ceph:v18, name=stoic_bartik, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:05:13 compute-0 podman[75444]: 2025-11-25 20:05:13.650496867 +0000 UTC m=+0.164137646 container start f97d9d285cacb8884b2bbd3ef06d87220b4798a81772f5cb3355feecf03259d8 (image=quay.io/ceph/ceph:v18, name=stoic_bartik, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:05:13 compute-0 podman[75444]: 2025-11-25 20:05:13.654524447 +0000 UTC m=+0.168165226 container attach f97d9d285cacb8884b2bbd3ef06d87220b4798a81772f5cb3355feecf03259d8 (image=quay.io/ceph/ceph:v18, name=stoic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:05:13 compute-0 ceph-mgr[75443]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 20:05:13 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'balancer'
Nov 25 20:05:13 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:13.901+0000 7f719841c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 20:05:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 20:05:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/270060621' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:14 compute-0 stoic_bartik[75484]: 
Nov 25 20:05:14 compute-0 stoic_bartik[75484]: {
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "health": {
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "status": "HEALTH_OK",
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "checks": {},
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "mutes": []
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     },
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "election_epoch": 5,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "quorum": [
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         0
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     ],
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "quorum_names": [
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "compute-0"
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     ],
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "quorum_age": 2,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "monmap": {
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "epoch": 1,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "min_mon_release_name": "reef",
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "num_mons": 1
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     },
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "osdmap": {
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "epoch": 1,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "num_osds": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "num_up_osds": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "osd_up_since": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "num_in_osds": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "osd_in_since": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "num_remapped_pgs": 0
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     },
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "pgmap": {
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "pgs_by_state": [],
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "num_pgs": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "num_pools": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "num_objects": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "data_bytes": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "bytes_used": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "bytes_avail": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "bytes_total": 0
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     },
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "fsmap": {
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "epoch": 1,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "by_rank": [],
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "up:standby": 0
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     },
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "mgrmap": {
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "available": false,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "num_standbys": 0,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "modules": [
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:             "iostat",
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:             "nfs",
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:             "restful"
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         ],
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "services": {}
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     },
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "servicemap": {
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "epoch": 1,
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "modified": "2025-11-25T20:05:07.852558+0000",
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:         "services": {}
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     },
Nov 25 20:05:14 compute-0 stoic_bartik[75484]:     "progress_events": {}
Nov 25 20:05:14 compute-0 stoic_bartik[75484]: }
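This JSON is `ceph status --format json-pretty` run from another transient container (stoic_bartik): the expected empty-cluster baseline, with HEALTH_OK, one mon in quorum, mgrmap.available still false, and zero OSDs, pools, and PGs. Bootstrap effectively polls this until a mgr registers; a sketch of such a wait loop, where the retry count and sleep interval are assumptions:

    import json
    import subprocess
    import time

    def ceph_status():
        out = subprocess.run(["ceph", "status", "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    # Wait for the mgr whose systemd unit starts below to become active.
    for _ in range(30):
        s = ceph_status()
        if s["mgrmap"]["available"]:
            break
        time.sleep(2)
    print(s["health"]["status"], "mons:", s["monmap"]["num_mons"])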
Nov 25 20:05:14 compute-0 systemd[1]: libpod-f97d9d285cacb8884b2bbd3ef06d87220b4798a81772f5cb3355feecf03259d8.scope: Deactivated successfully.
Nov 25 20:05:14 compute-0 podman[75444]: 2025-11-25 20:05:14.045201925 +0000 UTC m=+0.558842704 container died f97d9d285cacb8884b2bbd3ef06d87220b4798a81772f5cb3355feecf03259d8 (image=quay.io/ceph/ceph:v18, name=stoic_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:05:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1772d5a3c13a5dccb60f6ca6be27b7ef44c73cdd47ead0910df6637b0655cf55-merged.mount: Deactivated successfully.
Nov 25 20:05:14 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/270060621' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:14 compute-0 podman[75444]: 2025-11-25 20:05:14.09608874 +0000 UTC m=+0.609729549 container remove f97d9d285cacb8884b2bbd3ef06d87220b4798a81772f5cb3355feecf03259d8 (image=quay.io/ceph/ceph:v18, name=stoic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:05:14 compute-0 systemd[1]: libpod-conmon-f97d9d285cacb8884b2bbd3ef06d87220b4798a81772f5cb3355feecf03259d8.scope: Deactivated successfully.
Nov 25 20:05:14 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:14.148+0000 7f719841c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 20:05:14 compute-0 ceph-mgr[75443]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 20:05:14 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'cephadm'
Nov 25 20:05:15 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'crash'
Nov 25 20:05:16 compute-0 podman[75535]: 2025-11-25 20:05:16.208733164 +0000 UTC m=+0.076131432 container create b13d7ec842fc0ebc0490e94bc6ac5e438e6985c68ee583f3d7971fd0173cdd67 (image=quay.io/ceph/ceph:v18, name=modest_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:16 compute-0 systemd[1]: Started libpod-conmon-b13d7ec842fc0ebc0490e94bc6ac5e438e6985c68ee583f3d7971fd0173cdd67.scope.
Nov 25 20:05:16 compute-0 podman[75535]: 2025-11-25 20:05:16.175859659 +0000 UTC m=+0.043257967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:16 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:16.267+0000 7f719841c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 20:05:16 compute-0 ceph-mgr[75443]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 20:05:16 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'dashboard'
Nov 25 20:05:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7148f101db1351994f27ea6163b99fd475cd863ff914c579d6b74bf580e80d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7148f101db1351994f27ea6163b99fd475cd863ff914c579d6b74bf580e80d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7148f101db1351994f27ea6163b99fd475cd863ff914c579d6b74bf580e80d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:16 compute-0 podman[75535]: 2025-11-25 20:05:16.316097845 +0000 UTC m=+0.183496173 container init b13d7ec842fc0ebc0490e94bc6ac5e438e6985c68ee583f3d7971fd0173cdd67 (image=quay.io/ceph/ceph:v18, name=modest_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 20:05:16 compute-0 podman[75535]: 2025-11-25 20:05:16.328166554 +0000 UTC m=+0.195564822 container start b13d7ec842fc0ebc0490e94bc6ac5e438e6985c68ee583f3d7971fd0173cdd67 (image=quay.io/ceph/ceph:v18, name=modest_gauss, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:05:16 compute-0 podman[75535]: 2025-11-25 20:05:16.332566603 +0000 UTC m=+0.199964921 container attach b13d7ec842fc0ebc0490e94bc6ac5e438e6985c68ee583f3d7971fd0173cdd67 (image=quay.io/ceph/ceph:v18, name=modest_gauss, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 20:05:16 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327601528' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:16 compute-0 modest_gauss[75551]: 
Nov 25 20:05:16 compute-0 modest_gauss[75551]: {
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "health": {
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "status": "HEALTH_OK",
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "checks": {},
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "mutes": []
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     },
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "election_epoch": 5,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "quorum": [
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         0
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     ],
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "quorum_names": [
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "compute-0"
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     ],
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "quorum_age": 5,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "monmap": {
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "epoch": 1,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "min_mon_release_name": "reef",
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "num_mons": 1
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     },
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "osdmap": {
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "epoch": 1,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "num_osds": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "num_up_osds": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "osd_up_since": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "num_in_osds": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "osd_in_since": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "num_remapped_pgs": 0
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     },
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "pgmap": {
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "pgs_by_state": [],
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "num_pgs": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "num_pools": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "num_objects": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "data_bytes": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "bytes_used": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "bytes_avail": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "bytes_total": 0
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     },
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "fsmap": {
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "epoch": 1,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "by_rank": [],
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "up:standby": 0
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     },
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "mgrmap": {
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "available": false,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "num_standbys": 0,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "modules": [
Nov 25 20:05:16 compute-0 modest_gauss[75551]:             "iostat",
Nov 25 20:05:16 compute-0 modest_gauss[75551]:             "nfs",
Nov 25 20:05:16 compute-0 modest_gauss[75551]:             "restful"
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         ],
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "services": {}
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     },
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "servicemap": {
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "epoch": 1,
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "modified": "2025-11-25T20:05:07.852558+0000",
Nov 25 20:05:16 compute-0 modest_gauss[75551]:         "services": {}
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     },
Nov 25 20:05:16 compute-0 modest_gauss[75551]:     "progress_events": {}
Nov 25 20:05:16 compute-0 modest_gauss[75551]: }
Nov 25 20:05:16 compute-0 systemd[1]: libpod-b13d7ec842fc0ebc0490e94bc6ac5e438e6985c68ee583f3d7971fd0173cdd67.scope: Deactivated successfully.
Nov 25 20:05:16 compute-0 podman[75535]: 2025-11-25 20:05:16.728946707 +0000 UTC m=+0.596344975 container died b13d7ec842fc0ebc0490e94bc6ac5e438e6985c68ee583f3d7971fd0173cdd67 (image=quay.io/ceph/ceph:v18, name=modest_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 20:05:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d7148f101db1351994f27ea6163b99fd475cd863ff914c579d6b74bf580e80d-merged.mount: Deactivated successfully.
Nov 25 20:05:16 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1327601528' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:16 compute-0 podman[75535]: 2025-11-25 20:05:16.786257835 +0000 UTC m=+0.653656144 container remove b13d7ec842fc0ebc0490e94bc6ac5e438e6985c68ee583f3d7971fd0173cdd67 (image=quay.io/ceph/ceph:v18, name=modest_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:05:16 compute-0 systemd[1]: libpod-conmon-b13d7ec842fc0ebc0490e94bc6ac5e438e6985c68ee583f3d7971fd0173cdd67.scope: Deactivated successfully.
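
The create/start/attach/died/remove cycle above repeats every few seconds: the bootstrap process is evidently polling cluster status from short-lived quay.io/ceph/ceph:v18 containers (podman auto-generates names such as modest_gauss) and keeps polling while the dumped JSON reports "mgrmap": {"available": false}, i.e. until the mgr below finishes loading its modules. In the dumps that follow, only quorum_age advances (5, 8, 10, 13, 16 seconds), consistent with one poll roughly every three seconds. A minimal sketch of such a poll in Python, assuming the JSON shape shown above; the podman flags and the /etc/ceph mount are inferred from the surrounding log lines (the xfs remount messages name ceph.conf and the admin keyring), not taken from the tool's actual code:

    import json
    import subprocess
    import time

    IMAGE = "quay.io/ceph/ceph:v18"  # image named in the podman events above

    def cluster_status():
        # Run `ceph status` in a throwaway container, as each cycle in the
        # log does; --rm matches the "container remove" events that follow.
        out = subprocess.run(
            ["podman", "run", "--rm", "-v", "/etc/ceph:/etc/ceph:z", IMAGE,
             "ceph", "status", "--format", "json-pretty"],
            capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    # Poll until a mgr reports in, mirroring the repeated dumps below.
    while not cluster_status()["mgrmap"]["available"]:
        time.sleep(3)
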
Nov 25 20:05:17 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'devicehealth'
Nov 25 20:05:17 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:17.847+0000 7f719841c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 20:05:17 compute-0 ceph-mgr[75443]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
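
Each "Module <name> has missing NOTIFY_TYPES member" line is emitted once per module as the mgr's Python loader scans it, and each appears twice here because the same message reaches the journal both via the ceph-mgr unit and via the containerized service's log (the ceph-712dd110-...-mgr-compute-0-hdjasd unit). The warning means the module class does not declare which cluster-map notifications it consumes; as the log shows, it is informational and loading continues. A bare illustration of that kind of attribute check, with hypothetical names rather than the mgr's actual loader code:

    # Hypothetical sketch: ExampleModule stands in for a mgr plugin class
    # and is not a real Ceph class.
    class ExampleModule:
        pass

    for mod in (ExampleModule,):
        if not hasattr(mod, "NOTIFY_TYPES"):
            print("mgr[py] Module %s has missing NOTIFY_TYPES member"
                  % mod.__name__)
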
Nov 25 20:05:17 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 20:05:18 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 20:05:18 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 20:05:18 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]:   from numpy import show_config as show_numpy_config
Nov 25 20:05:18 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:18.356+0000 7f719841c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 20:05:18 compute-0 ceph-mgr[75443]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 20:05:18 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'influx'
Nov 25 20:05:18 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:18.589+0000 7f719841c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 20:05:18 compute-0 ceph-mgr[75443]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 20:05:18 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'insights'
Nov 25 20:05:18 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'iostat'
Nov 25 20:05:18 compute-0 podman[75589]: 2025-11-25 20:05:18.877641222 +0000 UTC m=+0.057955358 container create 943296bbda244fb16ce6142998fd39aa40de781919c4c6a576827ff343fea19a (image=quay.io/ceph/ceph:v18, name=upbeat_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:05:18 compute-0 systemd[1]: Started libpod-conmon-943296bbda244fb16ce6142998fd39aa40de781919c4c6a576827ff343fea19a.scope.
Nov 25 20:05:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308a0d1bf168bde69a2f53192d6cc8c313463a1820f0e72bb2157345f9a4792a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308a0d1bf168bde69a2f53192d6cc8c313463a1820f0e72bb2157345f9a4792a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308a0d1bf168bde69a2f53192d6cc8c313463a1820f0e72bb2157345f9a4792a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:18 compute-0 podman[75589]: 2025-11-25 20:05:18.858062849 +0000 UTC m=+0.038376965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:18 compute-0 podman[75589]: 2025-11-25 20:05:18.986254716 +0000 UTC m=+0.166568862 container init 943296bbda244fb16ce6142998fd39aa40de781919c4c6a576827ff343fea19a (image=quay.io/ceph/ceph:v18, name=upbeat_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 25 20:05:18 compute-0 podman[75589]: 2025-11-25 20:05:18.99373777 +0000 UTC m=+0.174051876 container start 943296bbda244fb16ce6142998fd39aa40de781919c4c6a576827ff343fea19a (image=quay.io/ceph/ceph:v18, name=upbeat_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:05:18 compute-0 podman[75589]: 2025-11-25 20:05:18.999175888 +0000 UTC m=+0.179490024 container attach 943296bbda244fb16ce6142998fd39aa40de781919c4c6a576827ff343fea19a (image=quay.io/ceph/ceph:v18, name=upbeat_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:19 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:19.070+0000 7f719841c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 20:05:19 compute-0 ceph-mgr[75443]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 20:05:19 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'k8sevents'
Nov 25 20:05:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 20:05:19 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426116819' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]: 
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]: {
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "health": {
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "status": "HEALTH_OK",
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "checks": {},
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "mutes": []
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     },
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "election_epoch": 5,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "quorum": [
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         0
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     ],
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "quorum_names": [
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "compute-0"
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     ],
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "quorum_age": 8,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "monmap": {
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "epoch": 1,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "min_mon_release_name": "reef",
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "num_mons": 1
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     },
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "osdmap": {
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "epoch": 1,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "num_osds": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "num_up_osds": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "osd_up_since": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "num_in_osds": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "osd_in_since": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "num_remapped_pgs": 0
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     },
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "pgmap": {
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "pgs_by_state": [],
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "num_pgs": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "num_pools": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "num_objects": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "data_bytes": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "bytes_used": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "bytes_avail": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "bytes_total": 0
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     },
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "fsmap": {
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "epoch": 1,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "by_rank": [],
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "up:standby": 0
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     },
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "mgrmap": {
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "available": false,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "num_standbys": 0,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "modules": [
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:             "iostat",
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:             "nfs",
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:             "restful"
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         ],
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "services": {}
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     },
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "servicemap": {
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "epoch": 1,
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "modified": "2025-11-25T20:05:07.852558+0000",
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:         "services": {}
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     },
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]:     "progress_events": {}
Nov 25 20:05:19 compute-0 upbeat_rhodes[75606]: }
Nov 25 20:05:19 compute-0 systemd[1]: libpod-943296bbda244fb16ce6142998fd39aa40de781919c4c6a576827ff343fea19a.scope: Deactivated successfully.
Nov 25 20:05:19 compute-0 podman[75589]: 2025-11-25 20:05:19.381924781 +0000 UTC m=+0.562238887 container died 943296bbda244fb16ce6142998fd39aa40de781919c4c6a576827ff343fea19a (image=quay.io/ceph/ceph:v18, name=upbeat_rhodes, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-308a0d1bf168bde69a2f53192d6cc8c313463a1820f0e72bb2157345f9a4792a-merged.mount: Deactivated successfully.
Nov 25 20:05:19 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3426116819' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:19 compute-0 podman[75589]: 2025-11-25 20:05:19.425560058 +0000 UTC m=+0.605874164 container remove 943296bbda244fb16ce6142998fd39aa40de781919c4c6a576827ff343fea19a (image=quay.io/ceph/ceph:v18, name=upbeat_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:05:19 compute-0 systemd[1]: libpod-conmon-943296bbda244fb16ce6142998fd39aa40de781919c4c6a576827ff343fea19a.scope: Deactivated successfully.
Nov 25 20:05:20 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'localpool'
Nov 25 20:05:21 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 20:05:21 compute-0 podman[75643]: 2025-11-25 20:05:21.521879328 +0000 UTC m=+0.062868061 container create d12cd8ed267e392922e890f4af7d5dd4b2b07906cb4c9139c62d53a972d6c7b6 (image=quay.io/ceph/ceph:v18, name=cranky_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:21 compute-0 systemd[1]: Started libpod-conmon-d12cd8ed267e392922e890f4af7d5dd4b2b07906cb4c9139c62d53a972d6c7b6.scope.
Nov 25 20:05:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:21 compute-0 podman[75643]: 2025-11-25 20:05:21.496518648 +0000 UTC m=+0.037507391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e851ca6924829a5d54c8c19e2d9fc238e806a262d22b39a8fb2105b6eae3fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e851ca6924829a5d54c8c19e2d9fc238e806a262d22b39a8fb2105b6eae3fa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e851ca6924829a5d54c8c19e2d9fc238e806a262d22b39a8fb2105b6eae3fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:21 compute-0 podman[75643]: 2025-11-25 20:05:21.626649309 +0000 UTC m=+0.167638102 container init d12cd8ed267e392922e890f4af7d5dd4b2b07906cb4c9139c62d53a972d6c7b6 (image=quay.io/ceph/ceph:v18, name=cranky_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 20:05:21 compute-0 podman[75643]: 2025-11-25 20:05:21.636175558 +0000 UTC m=+0.177164251 container start d12cd8ed267e392922e890f4af7d5dd4b2b07906cb4c9139c62d53a972d6c7b6 (image=quay.io/ceph/ceph:v18, name=cranky_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:21 compute-0 podman[75643]: 2025-11-25 20:05:21.639509458 +0000 UTC m=+0.180498191 container attach d12cd8ed267e392922e890f4af7d5dd4b2b07906cb4c9139c62d53a972d6c7b6 (image=quay.io/ceph/ceph:v18, name=cranky_elgamal, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:05:21 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'mirroring'
Nov 25 20:05:21 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'nfs'
Nov 25 20:05:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 20:05:21 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621350425' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]: 
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]: {
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "health": {
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "status": "HEALTH_OK",
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "checks": {},
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "mutes": []
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     },
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "election_epoch": 5,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "quorum": [
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         0
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     ],
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "quorum_names": [
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "compute-0"
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     ],
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "quorum_age": 10,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "monmap": {
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "epoch": 1,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "min_mon_release_name": "reef",
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "num_mons": 1
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     },
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "osdmap": {
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "epoch": 1,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "num_osds": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "num_up_osds": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "osd_up_since": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "num_in_osds": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "osd_in_since": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "num_remapped_pgs": 0
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     },
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "pgmap": {
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "pgs_by_state": [],
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "num_pgs": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "num_pools": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "num_objects": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "data_bytes": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "bytes_used": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "bytes_avail": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "bytes_total": 0
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     },
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "fsmap": {
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "epoch": 1,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "by_rank": [],
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "up:standby": 0
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     },
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "mgrmap": {
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "available": false,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "num_standbys": 0,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "modules": [
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:             "iostat",
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:             "nfs",
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:             "restful"
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         ],
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "services": {}
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     },
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "servicemap": {
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "epoch": 1,
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "modified": "2025-11-25T20:05:07.852558+0000",
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:         "services": {}
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     },
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]:     "progress_events": {}
Nov 25 20:05:21 compute-0 cranky_elgamal[75661]: }
Nov 25 20:05:22 compute-0 systemd[1]: libpod-d12cd8ed267e392922e890f4af7d5dd4b2b07906cb4c9139c62d53a972d6c7b6.scope: Deactivated successfully.
Nov 25 20:05:22 compute-0 podman[75643]: 2025-11-25 20:05:22.014337655 +0000 UTC m=+0.555326368 container died d12cd8ed267e392922e890f4af7d5dd4b2b07906cb4c9139c62d53a972d6c7b6 (image=quay.io/ceph/ceph:v18, name=cranky_elgamal, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0e851ca6924829a5d54c8c19e2d9fc238e806a262d22b39a8fb2105b6eae3fa-merged.mount: Deactivated successfully.
Nov 25 20:05:22 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3621350425' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:22 compute-0 podman[75643]: 2025-11-25 20:05:22.06593404 +0000 UTC m=+0.606922743 container remove d12cd8ed267e392922e890f4af7d5dd4b2b07906cb4c9139c62d53a972d6c7b6 (image=quay.io/ceph/ceph:v18, name=cranky_elgamal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:22 compute-0 systemd[1]: libpod-conmon-d12cd8ed267e392922e890f4af7d5dd4b2b07906cb4c9139c62d53a972d6c7b6.scope: Deactivated successfully.
Nov 25 20:05:22 compute-0 ceph-mgr[75443]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 20:05:22 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:22.581+0000 7f719841c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 20:05:22 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'orchestrator'
Nov 25 20:05:23 compute-0 ceph-mgr[75443]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 20:05:23 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 20:05:23 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:23.206+0000 7f719841c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 20:05:23 compute-0 ceph-mgr[75443]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 20:05:23 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'osd_support'
Nov 25 20:05:23 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:23.462+0000 7f719841c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 20:05:23 compute-0 ceph-mgr[75443]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 20:05:23 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 20:05:23 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:23.695+0000 7f719841c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 20:05:23 compute-0 ceph-mgr[75443]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 20:05:23 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'progress'
Nov 25 20:05:23 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:23.964+0000 7f719841c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 20:05:24 compute-0 podman[75702]: 2025-11-25 20:05:24.18135709 +0000 UTC m=+0.081754165 container create f2de29aeccdcb6f03a40bb8f5e171f57c17a4b6b7264b39f1264a1eeb41a744b (image=quay.io/ceph/ceph:v18, name=nervous_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:24 compute-0 ceph-mgr[75443]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 20:05:24 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'prometheus'
Nov 25 20:05:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:24.215+0000 7f719841c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 20:05:24 compute-0 systemd[1]: Started libpod-conmon-f2de29aeccdcb6f03a40bb8f5e171f57c17a4b6b7264b39f1264a1eeb41a744b.scope.
Nov 25 20:05:24 compute-0 podman[75702]: 2025-11-25 20:05:24.152058792 +0000 UTC m=+0.052455967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5b4f7aa8d05d0f75b09641eb2255d9267d49eb10b375174ea085e0efaced90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5b4f7aa8d05d0f75b09641eb2255d9267d49eb10b375174ea085e0efaced90/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5b4f7aa8d05d0f75b09641eb2255d9267d49eb10b375174ea085e0efaced90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:24 compute-0 podman[75702]: 2025-11-25 20:05:24.305567459 +0000 UTC m=+0.205964614 container init f2de29aeccdcb6f03a40bb8f5e171f57c17a4b6b7264b39f1264a1eeb41a744b (image=quay.io/ceph/ceph:v18, name=nervous_tu, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:24 compute-0 podman[75702]: 2025-11-25 20:05:24.317551124 +0000 UTC m=+0.217948209 container start f2de29aeccdcb6f03a40bb8f5e171f57c17a4b6b7264b39f1264a1eeb41a744b (image=quay.io/ceph/ceph:v18, name=nervous_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:05:24 compute-0 podman[75702]: 2025-11-25 20:05:24.320910806 +0000 UTC m=+0.221307881 container attach f2de29aeccdcb6f03a40bb8f5e171f57c17a4b6b7264b39f1264a1eeb41a744b (image=quay.io/ceph/ceph:v18, name=nervous_tu, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:05:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 20:05:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/965391026' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:24 compute-0 nervous_tu[75718]: 
Nov 25 20:05:24 compute-0 nervous_tu[75718]: {
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "health": {
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "status": "HEALTH_OK",
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "checks": {},
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "mutes": []
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     },
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "election_epoch": 5,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "quorum": [
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         0
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     ],
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "quorum_names": [
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "compute-0"
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     ],
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "quorum_age": 13,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "monmap": {
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "epoch": 1,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "min_mon_release_name": "reef",
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "num_mons": 1
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     },
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "osdmap": {
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "epoch": 1,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "num_osds": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "num_up_osds": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "osd_up_since": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "num_in_osds": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "osd_in_since": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "num_remapped_pgs": 0
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     },
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "pgmap": {
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "pgs_by_state": [],
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "num_pgs": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "num_pools": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "num_objects": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "data_bytes": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "bytes_used": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "bytes_avail": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "bytes_total": 0
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     },
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "fsmap": {
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "epoch": 1,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "by_rank": [],
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "up:standby": 0
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     },
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "mgrmap": {
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "available": false,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "num_standbys": 0,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "modules": [
Nov 25 20:05:24 compute-0 nervous_tu[75718]:             "iostat",
Nov 25 20:05:24 compute-0 nervous_tu[75718]:             "nfs",
Nov 25 20:05:24 compute-0 nervous_tu[75718]:             "restful"
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         ],
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "services": {}
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     },
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "servicemap": {
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "epoch": 1,
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "modified": "2025-11-25T20:05:07.852558+0000",
Nov 25 20:05:24 compute-0 nervous_tu[75718]:         "services": {}
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     },
Nov 25 20:05:24 compute-0 nervous_tu[75718]:     "progress_events": {}
Nov 25 20:05:24 compute-0 nervous_tu[75718]: }
Nov 25 20:05:24 compute-0 systemd[1]: libpod-f2de29aeccdcb6f03a40bb8f5e171f57c17a4b6b7264b39f1264a1eeb41a744b.scope: Deactivated successfully.
Nov 25 20:05:24 compute-0 podman[75702]: 2025-11-25 20:05:24.754239245 +0000 UTC m=+0.654636370 container died f2de29aeccdcb6f03a40bb8f5e171f57c17a4b6b7264b39f1264a1eeb41a744b (image=quay.io/ceph/ceph:v18, name=nervous_tu, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a5b4f7aa8d05d0f75b09641eb2255d9267d49eb10b375174ea085e0efaced90-merged.mount: Deactivated successfully.
Nov 25 20:05:24 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/965391026' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:24 compute-0 podman[75702]: 2025-11-25 20:05:24.81620213 +0000 UTC m=+0.716599235 container remove f2de29aeccdcb6f03a40bb8f5e171f57c17a4b6b7264b39f1264a1eeb41a744b (image=quay.io/ceph/ceph:v18, name=nervous_tu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:24 compute-0 systemd[1]: libpod-conmon-f2de29aeccdcb6f03a40bb8f5e171f57c17a4b6b7264b39f1264a1eeb41a744b.scope: Deactivated successfully.
Nov 25 20:05:25 compute-0 ceph-mgr[75443]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 20:05:25 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'rbd_support'
Nov 25 20:05:25 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:25.202+0000 7f719841c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 20:05:25 compute-0 ceph-mgr[75443]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 20:05:25 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'restful'
Nov 25 20:05:25 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:25.507+0000 7f719841c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 20:05:26 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'rgw'
Nov 25 20:05:26 compute-0 ceph-mgr[75443]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 20:05:26 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'rook'
Nov 25 20:05:26 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:26.902+0000 7f719841c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 20:05:26 compute-0 podman[75758]: 2025-11-25 20:05:26.925524225 +0000 UTC m=+0.071189777 container create 039816a55d3393451246996fb58e01a309bad5b2e1c89cce59fcf533e304c27c (image=quay.io/ceph/ceph:v18, name=charming_mendel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:26 compute-0 systemd[1]: Started libpod-conmon-039816a55d3393451246996fb58e01a309bad5b2e1c89cce59fcf533e304c27c.scope.
Nov 25 20:05:26 compute-0 podman[75758]: 2025-11-25 20:05:26.897219775 +0000 UTC m=+0.042885327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488c01c8f8cf364b0bc377c0c8567e554644186ed65b3faa08d5b9518255dc3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488c01c8f8cf364b0bc377c0c8567e554644186ed65b3faa08d5b9518255dc3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488c01c8f8cf364b0bc377c0c8567e554644186ed65b3faa08d5b9518255dc3a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:27 compute-0 podman[75758]: 2025-11-25 20:05:27.030850981 +0000 UTC m=+0.176516593 container init 039816a55d3393451246996fb58e01a309bad5b2e1c89cce59fcf533e304c27c (image=quay.io/ceph/ceph:v18, name=charming_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:27 compute-0 podman[75758]: 2025-11-25 20:05:27.040664469 +0000 UTC m=+0.186330031 container start 039816a55d3393451246996fb58e01a309bad5b2e1c89cce59fcf533e304c27c (image=quay.io/ceph/ceph:v18, name=charming_mendel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:05:27 compute-0 podman[75758]: 2025-11-25 20:05:27.044578385 +0000 UTC m=+0.190244007 container attach 039816a55d3393451246996fb58e01a309bad5b2e1c89cce59fcf533e304c27c (image=quay.io/ceph/ceph:v18, name=charming_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:05:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 20:05:27 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425732265' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:27 compute-0 charming_mendel[75775]: 
Nov 25 20:05:27 compute-0 charming_mendel[75775]: {
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "health": {
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "status": "HEALTH_OK",
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "checks": {},
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "mutes": []
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     },
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "election_epoch": 5,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "quorum": [
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         0
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     ],
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "quorum_names": [
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "compute-0"
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     ],
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "quorum_age": 16,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "monmap": {
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "epoch": 1,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "min_mon_release_name": "reef",
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "num_mons": 1
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     },
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "osdmap": {
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "epoch": 1,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "num_osds": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "num_up_osds": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "osd_up_since": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "num_in_osds": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "osd_in_since": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "num_remapped_pgs": 0
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     },
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "pgmap": {
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "pgs_by_state": [],
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "num_pgs": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "num_pools": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "num_objects": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "data_bytes": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "bytes_used": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "bytes_avail": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "bytes_total": 0
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     },
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "fsmap": {
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "epoch": 1,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "by_rank": [],
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "up:standby": 0
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     },
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "mgrmap": {
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "available": false,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "num_standbys": 0,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "modules": [
Nov 25 20:05:27 compute-0 charming_mendel[75775]:             "iostat",
Nov 25 20:05:27 compute-0 charming_mendel[75775]:             "nfs",
Nov 25 20:05:27 compute-0 charming_mendel[75775]:             "restful"
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         ],
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "services": {}
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     },
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "servicemap": {
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "epoch": 1,
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "modified": "2025-11-25T20:05:07.852558+0000",
Nov 25 20:05:27 compute-0 charming_mendel[75775]:         "services": {}
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     },
Nov 25 20:05:27 compute-0 charming_mendel[75775]:     "progress_events": {}
Nov 25 20:05:27 compute-0 charming_mendel[75775]: }
Nov 25 20:05:27 compute-0 systemd[1]: libpod-039816a55d3393451246996fb58e01a309bad5b2e1c89cce59fcf533e304c27c.scope: Deactivated successfully.
Nov 25 20:05:27 compute-0 podman[75801]: 2025-11-25 20:05:27.500606602 +0000 UTC m=+0.023383898 container died 039816a55d3393451246996fb58e01a309bad5b2e1c89cce59fcf533e304c27c (image=quay.io/ceph/ceph:v18, name=charming_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:27 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2425732265' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-488c01c8f8cf364b0bc377c0c8567e554644186ed65b3faa08d5b9518255dc3a-merged.mount: Deactivated successfully.
Nov 25 20:05:27 compute-0 podman[75801]: 2025-11-25 20:05:27.546175792 +0000 UTC m=+0.068953048 container remove 039816a55d3393451246996fb58e01a309bad5b2e1c89cce59fcf533e304c27c (image=quay.io/ceph/ceph:v18, name=charming_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:27 compute-0 systemd[1]: libpod-conmon-039816a55d3393451246996fb58e01a309bad5b2e1c89cce59fcf533e304c27c.scope: Deactivated successfully.
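[annotation] The create → init → start → attach → died → remove sequence just completed for charming_mendel (and repeated below for infallible_shaw, brave_elbakyan, and the rest) is the footprint of a one-shot `podman run --rm` that executes a single ceph CLI call inside quay.io/ceph/ceph:v18 and discards the container. A rough reconstruction under stated assumptions: the bind mounts mirror the conf/keyring paths visible in the xfs remount lines, and any additional flags cephadm passes are omitted.

```python
# Rough reconstruction of the one-shot container pattern above: run
# one `ceph status` inside quay.io/ceph/ceph:v18 and let podman remove
# the container on exit. Mount paths taken from the remount messages;
# other cephadm flags are intentionally left out of this sketch.
import json
import subprocess

result = subprocess.run(
    ["podman", "run", "--rm",
     "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro",
     "-v", "/etc/ceph/ceph.client.admin.keyring:"
           "/etc/ceph/ceph.client.admin.keyring:ro",
     "quay.io/ceph/ceph:v18",
     "ceph", "status", "--format", "json-pretty"],
    capture_output=True, text=True, check=True)

print(json.loads(result.stdout)["health"]["status"])  # e.g. HEALTH_OK
```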
Nov 25 20:05:28 compute-0 ceph-mgr[75443]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 20:05:28 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'selftest'
Nov 25 20:05:28 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:28.904+0000 7f719841c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 20:05:29 compute-0 ceph-mgr[75443]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 20:05:29 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'snap_schedule'
Nov 25 20:05:29 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:29.146+0000 7f719841c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 20:05:29 compute-0 ceph-mgr[75443]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 20:05:29 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'stats'
Nov 25 20:05:29 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:29.387+0000 7f719841c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 20:05:29 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'status'
Nov 25 20:05:29 compute-0 podman[75817]: 2025-11-25 20:05:29.646212918 +0000 UTC m=+0.056494829 container create 83653519a7cc2325d17af14e8b26da7a1052362ddbaeff5a48e3358009054ad2 (image=quay.io/ceph/ceph:v18, name=infallible_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 20:05:29 compute-0 systemd[1]: Started libpod-conmon-83653519a7cc2325d17af14e8b26da7a1052362ddbaeff5a48e3358009054ad2.scope.
Nov 25 20:05:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:29 compute-0 podman[75817]: 2025-11-25 20:05:29.625436012 +0000 UTC m=+0.035717993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e95dd701eed053e2674f7f3cb28c55d0325a725d7666a1dbea8deb133fd3cfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e95dd701eed053e2674f7f3cb28c55d0325a725d7666a1dbea8deb133fd3cfd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e95dd701eed053e2674f7f3cb28c55d0325a725d7666a1dbea8deb133fd3cfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:29 compute-0 podman[75817]: 2025-11-25 20:05:29.73785903 +0000 UTC m=+0.148140941 container init 83653519a7cc2325d17af14e8b26da7a1052362ddbaeff5a48e3358009054ad2 (image=quay.io/ceph/ceph:v18, name=infallible_shaw, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:29 compute-0 podman[75817]: 2025-11-25 20:05:29.747243436 +0000 UTC m=+0.157525327 container start 83653519a7cc2325d17af14e8b26da7a1052362ddbaeff5a48e3358009054ad2 (image=quay.io/ceph/ceph:v18, name=infallible_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:05:29 compute-0 podman[75817]: 2025-11-25 20:05:29.7510906 +0000 UTC m=+0.161372551 container attach 83653519a7cc2325d17af14e8b26da7a1052362ddbaeff5a48e3358009054ad2 (image=quay.io/ceph/ceph:v18, name=infallible_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:05:29 compute-0 ceph-mgr[75443]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 20:05:29 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'telegraf'
Nov 25 20:05:29 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:29.908+0000 7f719841c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 20:05:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 20:05:30 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352544943' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:30 compute-0 infallible_shaw[75833]: 
Nov 25 20:05:30 compute-0 infallible_shaw[75833]: {
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "health": {
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "status": "HEALTH_OK",
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "checks": {},
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "mutes": []
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     },
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "election_epoch": 5,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "quorum": [
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         0
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     ],
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "quorum_names": [
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "compute-0"
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     ],
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "quorum_age": 19,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "monmap": {
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "epoch": 1,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "min_mon_release_name": "reef",
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "num_mons": 1
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     },
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "osdmap": {
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "epoch": 1,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "num_osds": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "num_up_osds": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "osd_up_since": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "num_in_osds": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "osd_in_since": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "num_remapped_pgs": 0
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     },
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "pgmap": {
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "pgs_by_state": [],
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "num_pgs": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "num_pools": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "num_objects": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "data_bytes": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "bytes_used": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "bytes_avail": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "bytes_total": 0
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     },
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "fsmap": {
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "epoch": 1,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "by_rank": [],
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "up:standby": 0
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     },
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "mgrmap": {
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "available": false,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "num_standbys": 0,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "modules": [
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:             "iostat",
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:             "nfs",
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:             "restful"
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         ],
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "services": {}
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     },
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "servicemap": {
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "epoch": 1,
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "modified": "2025-11-25T20:05:07.852558+0000",
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:         "services": {}
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     },
Nov 25 20:05:30 compute-0 infallible_shaw[75833]:     "progress_events": {}
Nov 25 20:05:30 compute-0 infallible_shaw[75833]: }
Nov 25 20:05:30 compute-0 ceph-mgr[75443]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 20:05:30 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'telemetry'
Nov 25 20:05:30 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:30.145+0000 7f719841c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 20:05:30 compute-0 systemd[1]: libpod-83653519a7cc2325d17af14e8b26da7a1052362ddbaeff5a48e3358009054ad2.scope: Deactivated successfully.
Nov 25 20:05:30 compute-0 podman[75817]: 2025-11-25 20:05:30.159089471 +0000 UTC m=+0.569371362 container died 83653519a7cc2325d17af14e8b26da7a1052362ddbaeff5a48e3358009054ad2 (image=quay.io/ceph/ceph:v18, name=infallible_shaw, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e95dd701eed053e2674f7f3cb28c55d0325a725d7666a1dbea8deb133fd3cfd-merged.mount: Deactivated successfully.
Nov 25 20:05:30 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1352544943' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:30 compute-0 podman[75817]: 2025-11-25 20:05:30.199531461 +0000 UTC m=+0.609813352 container remove 83653519a7cc2325d17af14e8b26da7a1052362ddbaeff5a48e3358009054ad2 (image=quay.io/ceph/ceph:v18, name=infallible_shaw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:05:30 compute-0 systemd[1]: libpod-conmon-83653519a7cc2325d17af14e8b26da7a1052362ddbaeff5a48e3358009054ad2.scope: Deactivated successfully.
Nov 25 20:05:30 compute-0 ceph-mgr[75443]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 20:05:30 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 20:05:30 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:30.781+0000 7f719841c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 20:05:31 compute-0 ceph-mgr[75443]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 20:05:31 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'volumes'
Nov 25 20:05:31 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:31.468+0000 7f719841c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'zabbix'
Nov 25 20:05:32 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:32.182+0000 7f719841c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 20:05:32 compute-0 podman[75870]: 2025-11-25 20:05:32.278561665 +0000 UTC m=+0.054059142 container create 0acf1f30b3e1be8bc97da0caef3ec00c1583e23fdfb927865af000129200147a (image=quay.io/ceph/ceph:v18, name=brave_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:05:32 compute-0 systemd[1]: Started libpod-conmon-0acf1f30b3e1be8bc97da0caef3ec00c1583e23fdfb927865af000129200147a.scope.
Nov 25 20:05:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f01411f458a58b33bf84ae43b4c7ed9b1f23669632efbfc1fabe2cd7e47481b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f01411f458a58b33bf84ae43b4c7ed9b1f23669632efbfc1fabe2cd7e47481b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f01411f458a58b33bf84ae43b4c7ed9b1f23669632efbfc1fabe2cd7e47481b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:32 compute-0 podman[75870]: 2025-11-25 20:05:32.257909593 +0000 UTC m=+0.033407070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:32 compute-0 podman[75870]: 2025-11-25 20:05:32.357004469 +0000 UTC m=+0.132501926 container init 0acf1f30b3e1be8bc97da0caef3ec00c1583e23fdfb927865af000129200147a (image=quay.io/ceph/ceph:v18, name=brave_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:05:32 compute-0 podman[75870]: 2025-11-25 20:05:32.370034804 +0000 UTC m=+0.145532281 container start 0acf1f30b3e1be8bc97da0caef3ec00c1583e23fdfb927865af000129200147a (image=quay.io/ceph/ceph:v18, name=brave_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:05:32 compute-0 podman[75870]: 2025-11-25 20:05:32.374304979 +0000 UTC m=+0.149802436 container attach 0acf1f30b3e1be8bc97da0caef3ec00c1583e23fdfb927865af000129200147a (image=quay.io/ceph/ceph:v18, name=brave_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 20:05:32 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:32.433+0000 7f719841c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
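[annotation] The recurring "Module X has missing NOTIFY_TYPES member" lines are the mgr noting, while loading each Python module, that the module does not declare which cluster notifications it consumes. The messages are informational rather than fatal here: the same modules are constructed successfully a few lines below. A minimal sketch of the member the mgr is checking for, assuming the Reef-era mgr_module API (this only imports inside the ceph-mgr runtime; the module itself is hypothetical):

```python
# Minimal sketch of the NOTIFY_TYPES member the mgr checks at module
# load time, assuming the Reef-era mgr_module API. Modules without the
# attribute trigger the log lines above; declaring it limits which
# notifications reach the module's notify() hook.
from mgr_module import MgrModule, NotifyType


class ExampleModule(MgrModule):
    # Hypothetical module for illustration only.
    NOTIFY_TYPES = [NotifyType.osd_map, NotifyType.pg_summary]

    def notify(self, notify_type: NotifyType, notify_id: str) -> None:
        self.log.info("got %s notification", notify_type)
```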
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: ms_deliver_dispatch: unhandled message 0x5612ccc0b1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hdjasd
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr handle_mgr_map Activating!
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr handle_mgr_map I am now activating
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.hdjasd(active, starting, since 0.0135504s)
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hdjasd", "id": "compute-0.hdjasd"} v 0) v1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hdjasd", "id": "compute-0.hdjasd"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: balancer
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [balancer INFO root] Starting
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: crash
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : Manager daemon compute-0.hdjasd is now available
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:05:32
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [balancer INFO root] No pools available
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: devicehealth
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: iostat
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [devicehealth INFO root] Starting
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: nfs
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: orchestrator
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: pg_autoscaler
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: progress
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [progress INFO root] Loading...
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [progress INFO root] No stored events to load
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [progress INFO root] Loaded [] historic events
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support INFO root] recovery thread starting
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support INFO root] starting setup
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: rbd_support
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: restful
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: status
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: telemetry
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [restful WARNING root] server not running: no certificate configured
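[annotation] The restful module binds its server_addr/server_port (8003 above) but refuses to serve until a TLS certificate is configured, hence "server not running: no certificate configured". A sketch of the usual remedy using the documented `ceph restful create-self-signed-cert` and `ceph mgr module` commands, wrapped in subprocess for consistency with the other sketches; run it wherever the admin keyring is available.

```python
# Sketch of the usual fix for "server not running: no certificate
# configured": create a self-signed cert for the restful module, then
# disable/enable the module so it picks the certificate up.
import subprocess

for cmd in (
    ["ceph", "restful", "create-self-signed-cert"],
    ["ceph", "mgr", "module", "disable", "restful"],
    ["ceph", "mgr", "module", "enable", "restful"],
):
    subprocess.run(cmd, check=True)
```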
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/mirror_snapshot_schedule"} v 0) v1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/mirror_snapshot_schedule"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support INFO root] PerfHandler: starting
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TaskHandler: starting
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/trash_purge_schedule"} v 0) v1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/trash_purge_schedule"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: [rbd_support INFO root] setup complete
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: Activating manager daemon compute-0.hdjasd
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mgrmap e2: compute-0.hdjasd(active, starting, since 0.0135504s)
Nov 25 20:05:32 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hdjasd", "id": "compute-0.hdjasd"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: Manager daemon compute-0.hdjasd is now available
Nov 25 20:05:32 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/mirror_snapshot_schedule"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/trash_purge_schedule"}]: dispatch
Nov 25 20:05:32 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:32 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: volumes
Nov 25 20:05:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 20:05:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3509922361' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]: 
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]: {
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "health": {
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "status": "HEALTH_OK",
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "checks": {},
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "mutes": []
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     },
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "election_epoch": 5,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "quorum": [
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         0
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     ],
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "quorum_names": [
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "compute-0"
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     ],
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "quorum_age": 21,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "monmap": {
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "epoch": 1,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "min_mon_release_name": "reef",
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "num_mons": 1
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     },
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "osdmap": {
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "epoch": 1,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "num_osds": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "num_up_osds": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "osd_up_since": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "num_in_osds": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "osd_in_since": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "num_remapped_pgs": 0
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     },
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "pgmap": {
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "pgs_by_state": [],
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "num_pgs": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "num_pools": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "num_objects": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "data_bytes": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "bytes_used": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "bytes_avail": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "bytes_total": 0
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     },
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "fsmap": {
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "epoch": 1,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "by_rank": [],
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "up:standby": 0
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     },
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "mgrmap": {
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "available": false,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "num_standbys": 0,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "modules": [
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:             "iostat",
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:             "nfs",
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:             "restful"
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         ],
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "services": {}
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     },
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "servicemap": {
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "epoch": 1,
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "modified": "2025-11-25T20:05:07.852558+0000",
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:         "services": {}
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     },
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]:     "progress_events": {}
Nov 25 20:05:32 compute-0 brave_elbakyan[75886]: }
Nov 25 20:05:32 compute-0 systemd[1]: libpod-0acf1f30b3e1be8bc97da0caef3ec00c1583e23fdfb927865af000129200147a.scope: Deactivated successfully.
Nov 25 20:05:32 compute-0 podman[75870]: 2025-11-25 20:05:32.814290661 +0000 UTC m=+0.589788128 container died 0acf1f30b3e1be8bc97da0caef3ec00c1583e23fdfb927865af000129200147a (image=quay.io/ceph/ceph:v18, name=brave_elbakyan, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f01411f458a58b33bf84ae43b4c7ed9b1f23669632efbfc1fabe2cd7e47481b-merged.mount: Deactivated successfully.
Nov 25 20:05:32 compute-0 podman[75870]: 2025-11-25 20:05:32.869607175 +0000 UTC m=+0.645104652 container remove 0acf1f30b3e1be8bc97da0caef3ec00c1583e23fdfb927865af000129200147a (image=quay.io/ceph/ceph:v18, name=brave_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 20:05:32 compute-0 systemd[1]: libpod-conmon-0acf1f30b3e1be8bc97da0caef3ec00c1583e23fdfb927865af000129200147a.scope: Deactivated successfully.
Nov 25 20:05:33 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.hdjasd(active, since 1.0246s)
Nov 25 20:05:33 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:33 compute-0 ceph-mon[75144]: from='mgr.14102 192.168.122.100:0/1817223623' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:33 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3509922361' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:33 compute-0 ceph-mon[75144]: mgrmap e3: compute-0.hdjasd(active, since 1.0246s)
Nov 25 20:05:34 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:05:34 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.hdjasd(active, since 2s)
Nov 25 20:05:34 compute-0 podman[76004]: 2025-11-25 20:05:34.965942551 +0000 UTC m=+0.067081966 container create 243a605d1d3c3ac039585bb5e3166a5f6661f5dfa691faeb501a1f022a2c8394 (image=quay.io/ceph/ceph:v18, name=pedantic_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:35 compute-0 systemd[1]: Started libpod-conmon-243a605d1d3c3ac039585bb5e3166a5f6661f5dfa691faeb501a1f022a2c8394.scope.
Nov 25 20:05:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c06cea4420b66454fce0389ff741cbc12c5e87a5be0a68564db167bcc431bdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c06cea4420b66454fce0389ff741cbc12c5e87a5be0a68564db167bcc431bdf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c06cea4420b66454fce0389ff741cbc12c5e87a5be0a68564db167bcc431bdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:35 compute-0 podman[76004]: 2025-11-25 20:05:34.937624429 +0000 UTC m=+0.038763894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:35 compute-0 podman[76004]: 2025-11-25 20:05:35.04018485 +0000 UTC m=+0.141324325 container init 243a605d1d3c3ac039585bb5e3166a5f6661f5dfa691faeb501a1f022a2c8394 (image=quay.io/ceph/ceph:v18, name=pedantic_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:05:35 compute-0 podman[76004]: 2025-11-25 20:05:35.048557178 +0000 UTC m=+0.149696553 container start 243a605d1d3c3ac039585bb5e3166a5f6661f5dfa691faeb501a1f022a2c8394 (image=quay.io/ceph/ceph:v18, name=pedantic_roentgen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:35 compute-0 podman[76004]: 2025-11-25 20:05:35.051048646 +0000 UTC m=+0.152188061 container attach 243a605d1d3c3ac039585bb5e3166a5f6661f5dfa691faeb501a1f022a2c8394 (image=quay.io/ceph/ceph:v18, name=pedantic_roentgen, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:35 compute-0 ceph-mon[75144]: mgrmap e4: compute-0.hdjasd(active, since 2s)
Nov 25 20:05:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 20:05:35 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2642634897' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]: 
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]: {
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "health": {
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "status": "HEALTH_OK",
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "checks": {},
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "mutes": []
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     },
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "election_epoch": 5,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "quorum": [
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         0
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     ],
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "quorum_names": [
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "compute-0"
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     ],
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "quorum_age": 24,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "monmap": {
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "epoch": 1,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "min_mon_release_name": "reef",
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "num_mons": 1
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     },
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "osdmap": {
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "epoch": 1,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "num_osds": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "num_up_osds": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "osd_up_since": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "num_in_osds": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "osd_in_since": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "num_remapped_pgs": 0
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     },
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "pgmap": {
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "pgs_by_state": [],
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "num_pgs": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "num_pools": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "num_objects": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "data_bytes": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "bytes_used": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "bytes_avail": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "bytes_total": 0
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     },
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "fsmap": {
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "epoch": 1,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "by_rank": [],
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "up:standby": 0
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     },
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "mgrmap": {
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "available": true,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "num_standbys": 0,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "modules": [
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:             "iostat",
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:             "nfs",
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:             "restful"
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         ],
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "services": {}
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     },
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "servicemap": {
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "epoch": 1,
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "modified": "2025-11-25T20:05:07.852558+0000",
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:         "services": {}
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     },
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]:     "progress_events": {}
Nov 25 20:05:35 compute-0 pedantic_roentgen[76020]: }
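[annotation] This status sample is the first where "available" reads true: mgrmap e3 marked compute-0.hdjasd active at 20:05:33, and the 20:05:35 query reflects it. Taken together, the repeated one-shot status containers amount to a poll-until-ready loop. A condensed sketch of that loop; ceph_status() stands in for any of the earlier ways to fetch the JSON (one-shot podman container, python-rados, plain CLI), and the 1 s interval and 60-try cap are illustrative, not taken from this log.

```python
# Condensed sketch of the wait loop implied by the repeated status
# queries above: poll `ceph status` until mgrmap.available is true.
import json
import subprocess
import time


def ceph_status() -> dict:
    out = subprocess.run(["ceph", "status", "--format", "json"],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)


for _ in range(60):                      # illustrative cap
    if ceph_status()["mgrmap"]["available"]:
        break
    time.sleep(1)                        # illustrative interval
else:
    raise TimeoutError("mgr never became available")
```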
Nov 25 20:05:35 compute-0 systemd[1]: libpod-243a605d1d3c3ac039585bb5e3166a5f6661f5dfa691faeb501a1f022a2c8394.scope: Deactivated successfully.
Nov 25 20:05:35 compute-0 podman[76004]: 2025-11-25 20:05:35.673576303 +0000 UTC m=+0.774715688 container died 243a605d1d3c3ac039585bb5e3166a5f6661f5dfa691faeb501a1f022a2c8394 (image=quay.io/ceph/ceph:v18, name=pedantic_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c06cea4420b66454fce0389ff741cbc12c5e87a5be0a68564db167bcc431bdf-merged.mount: Deactivated successfully.
Nov 25 20:05:35 compute-0 podman[76004]: 2025-11-25 20:05:35.713588821 +0000 UTC m=+0.814728196 container remove 243a605d1d3c3ac039585bb5e3166a5f6661f5dfa691faeb501a1f022a2c8394 (image=quay.io/ceph/ceph:v18, name=pedantic_roentgen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:35 compute-0 systemd[1]: libpod-conmon-243a605d1d3c3ac039585bb5e3166a5f6661f5dfa691faeb501a1f022a2c8394.scope: Deactivated successfully.
Nov 25 20:05:35 compute-0 podman[76059]: 2025-11-25 20:05:35.769235815 +0000 UTC m=+0.037369947 container create 69eeb07f5b7c02645c40d5b65f56ac21d2a6e78cc7969fb186c671d263af61b6 (image=quay.io/ceph/ceph:v18, name=pedantic_mestorf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:05:35 compute-0 systemd[1]: Started libpod-conmon-69eeb07f5b7c02645c40d5b65f56ac21d2a6e78cc7969fb186c671d263af61b6.scope.
Nov 25 20:05:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5215a3017237f1eb695ade84eb008a8912eb660bdebbdf0b22be39171b603d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5215a3017237f1eb695ade84eb008a8912eb660bdebbdf0b22be39171b603d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5215a3017237f1eb695ade84eb008a8912eb660bdebbdf0b22be39171b603d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5215a3017237f1eb695ade84eb008a8912eb660bdebbdf0b22be39171b603d/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:35 compute-0 podman[76059]: 2025-11-25 20:05:35.842307093 +0000 UTC m=+0.110441225 container init 69eeb07f5b7c02645c40d5b65f56ac21d2a6e78cc7969fb186c671d263af61b6 (image=quay.io/ceph/ceph:v18, name=pedantic_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:05:35 compute-0 podman[76059]: 2025-11-25 20:05:35.754998188 +0000 UTC m=+0.023132340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:35 compute-0 podman[76059]: 2025-11-25 20:05:35.853959961 +0000 UTC m=+0.122094093 container start 69eeb07f5b7c02645c40d5b65f56ac21d2a6e78cc7969fb186c671d263af61b6 (image=quay.io/ceph/ceph:v18, name=pedantic_mestorf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:05:35 compute-0 podman[76059]: 2025-11-25 20:05:35.857670672 +0000 UTC m=+0.125804834 container attach 69eeb07f5b7c02645c40d5b65f56ac21d2a6e78cc7969fb186c671d263af61b6 (image=quay.io/ceph/ceph:v18, name=pedantic_mestorf, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 20:05:36 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3435007758' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 20:05:36 compute-0 systemd[1]: libpod-69eeb07f5b7c02645c40d5b65f56ac21d2a6e78cc7969fb186c671d263af61b6.scope: Deactivated successfully.
Nov 25 20:05:36 compute-0 podman[76102]: 2025-11-25 20:05:36.391574217 +0000 UTC m=+0.024320252 container died 69eeb07f5b7c02645c40d5b65f56ac21d2a6e78cc7969fb186c671d263af61b6 (image=quay.io/ceph/ceph:v18, name=pedantic_mestorf, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:05:36 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:05:36 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2642634897' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 20:05:36 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3435007758' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 20:05:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d5215a3017237f1eb695ade84eb008a8912eb660bdebbdf0b22be39171b603d-merged.mount: Deactivated successfully.
Nov 25 20:05:36 compute-0 podman[76102]: 2025-11-25 20:05:36.917925058 +0000 UTC m=+0.550671073 container remove 69eeb07f5b7c02645c40d5b65f56ac21d2a6e78cc7969fb186c671d263af61b6 (image=quay.io/ceph/ceph:v18, name=pedantic_mestorf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:05:36 compute-0 systemd[1]: libpod-conmon-69eeb07f5b7c02645c40d5b65f56ac21d2a6e78cc7969fb186c671d263af61b6.scope: Deactivated successfully.
Nov 25 20:05:37 compute-0 podman[76119]: 2025-11-25 20:05:37.017732554 +0000 UTC m=+0.063658773 container create 0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452 (image=quay.io/ceph/ceph:v18, name=epic_golick, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:05:37 compute-0 systemd[1]: Started libpod-conmon-0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452.scope.
Nov 25 20:05:37 compute-0 podman[76119]: 2025-11-25 20:05:36.992307702 +0000 UTC m=+0.038233961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33459822f3f70ddd3a64eb49516d379e8f6fb80239c3ec195376976acbd2687/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33459822f3f70ddd3a64eb49516d379e8f6fb80239c3ec195376976acbd2687/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33459822f3f70ddd3a64eb49516d379e8f6fb80239c3ec195376976acbd2687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:37 compute-0 podman[76119]: 2025-11-25 20:05:37.117114357 +0000 UTC m=+0.163040606 container init 0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452 (image=quay.io/ceph/ceph:v18, name=epic_golick, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:05:37 compute-0 podman[76119]: 2025-11-25 20:05:37.126636217 +0000 UTC m=+0.172562436 container start 0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452 (image=quay.io/ceph/ceph:v18, name=epic_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 20:05:37 compute-0 podman[76119]: 2025-11-25 20:05:37.131205261 +0000 UTC m=+0.177131530 container attach 0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452 (image=quay.io/ceph/ceph:v18, name=epic_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:37 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 25 20:05:37 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/329201090' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 20:05:37 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/329201090' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  1: '-n'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  2: 'mgr.compute-0.hdjasd'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  3: '-f'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  4: '--setuser'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  5: 'ceph'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  6: '--setgroup'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  7: 'ceph'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  8: '--default-log-to-file=false'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  9: '--default-log-to-journald=true'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 25 20:05:37 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.hdjasd(active, since 5s)
Nov 25 20:05:37 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/329201090' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 20:05:37 compute-0 systemd[1]: libpod-0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452.scope: Deactivated successfully.
Nov 25 20:05:37 compute-0 conmon[76136]: conmon 0d13ce20bf76c786bd46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452.scope/container/memory.events
Nov 25 20:05:37 compute-0 podman[76119]: 2025-11-25 20:05:37.776269701 +0000 UTC m=+0.822195900 container died 0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452 (image=quay.io/ceph/ceph:v18, name=epic_golick, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a33459822f3f70ddd3a64eb49516d379e8f6fb80239c3ec195376976acbd2687-merged.mount: Deactivated successfully.
Nov 25 20:05:37 compute-0 podman[76119]: 2025-11-25 20:05:37.818409688 +0000 UTC m=+0.864335867 container remove 0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452 (image=quay.io/ceph/ceph:v18, name=epic_golick, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:05:37 compute-0 systemd[1]: libpod-conmon-0d13ce20bf76c786bd462e38b7845898e1eae393f20c5934a925210c19b0b452.scope: Deactivated successfully.
Nov 25 20:05:37 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: ignoring --setuser ceph since I am not root
Nov 25 20:05:37 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: ignoring --setgroup ceph since I am not root
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: pidfile_write: ignore empty --pid-file
Nov 25 20:05:37 compute-0 podman[76176]: 2025-11-25 20:05:37.912198719 +0000 UTC m=+0.068665749 container create 68606c4ef6209463bf98c1a70593a34e51a17fcd5950ed453160ff7c8a04b80c (image=quay.io/ceph/ceph:v18, name=crazy_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:05:37 compute-0 systemd[1]: Started libpod-conmon-68606c4ef6209463bf98c1a70593a34e51a17fcd5950ed453160ff7c8a04b80c.scope.
Nov 25 20:05:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26df8bcde16685384473eb99b386b13a303f21b847862e46a92910bae7b9c3f7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26df8bcde16685384473eb99b386b13a303f21b847862e46a92910bae7b9c3f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26df8bcde16685384473eb99b386b13a303f21b847862e46a92910bae7b9c3f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:37 compute-0 podman[76176]: 2025-11-25 20:05:37.885932845 +0000 UTC m=+0.042399925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:37 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'alerts'
Nov 25 20:05:37 compute-0 podman[76176]: 2025-11-25 20:05:37.999171575 +0000 UTC m=+0.155638665 container init 68606c4ef6209463bf98c1a70593a34e51a17fcd5950ed453160ff7c8a04b80c (image=quay.io/ceph/ceph:v18, name=crazy_allen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:05:38 compute-0 podman[76176]: 2025-11-25 20:05:38.007921343 +0000 UTC m=+0.164388363 container start 68606c4ef6209463bf98c1a70593a34e51a17fcd5950ed453160ff7c8a04b80c (image=quay.io/ceph/ceph:v18, name=crazy_allen, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:38 compute-0 podman[76176]: 2025-11-25 20:05:38.011778629 +0000 UTC m=+0.168245659 container attach 68606c4ef6209463bf98c1a70593a34e51a17fcd5950ed453160ff7c8a04b80c (image=quay.io/ceph/ceph:v18, name=crazy_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:05:38 compute-0 ceph-mgr[75443]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 20:05:38 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'balancer'
Nov 25 20:05:38 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:38.278+0000 7f93253c6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 20:05:38 compute-0 ceph-mgr[75443]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 20:05:38 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'cephadm'
Nov 25 20:05:38 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:38.507+0000 7f93253c6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 20:05:38 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 20:05:38 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502617629' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 20:05:38 compute-0 crazy_allen[76216]: {
Nov 25 20:05:38 compute-0 crazy_allen[76216]:     "epoch": 5,
Nov 25 20:05:38 compute-0 crazy_allen[76216]:     "available": true,
Nov 25 20:05:38 compute-0 crazy_allen[76216]:     "active_name": "compute-0.hdjasd",
Nov 25 20:05:38 compute-0 crazy_allen[76216]:     "num_standby": 0
Nov 25 20:05:38 compute-0 crazy_allen[76216]: }
Nov 25 20:05:38 compute-0 systemd[1]: libpod-68606c4ef6209463bf98c1a70593a34e51a17fcd5950ed453160ff7c8a04b80c.scope: Deactivated successfully.
Nov 25 20:05:38 compute-0 podman[76242]: 2025-11-25 20:05:38.648341017 +0000 UTC m=+0.026536202 container died 68606c4ef6209463bf98c1a70593a34e51a17fcd5950ed453160ff7c8a04b80c (image=quay.io/ceph/ceph:v18, name=crazy_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:05:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-26df8bcde16685384473eb99b386b13a303f21b847862e46a92910bae7b9c3f7-merged.mount: Deactivated successfully.
Nov 25 20:05:38 compute-0 podman[76242]: 2025-11-25 20:05:38.687487453 +0000 UTC m=+0.065682638 container remove 68606c4ef6209463bf98c1a70593a34e51a17fcd5950ed453160ff7c8a04b80c (image=quay.io/ceph/ceph:v18, name=crazy_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:38 compute-0 systemd[1]: libpod-conmon-68606c4ef6209463bf98c1a70593a34e51a17fcd5950ed453160ff7c8a04b80c.scope: Deactivated successfully.
Nov 25 20:05:38 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/329201090' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 20:05:38 compute-0 ceph-mon[75144]: mgrmap e5: compute-0.hdjasd(active, since 5s)
Nov 25 20:05:38 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1502617629' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 20:05:38 compute-0 podman[76257]: 2025-11-25 20:05:38.767101288 +0000 UTC m=+0.053651680 container create d1a198cc62609a35069a2d266645b86ad0015e25cb8e7636403e2fea8a07a234 (image=quay.io/ceph/ceph:v18, name=magical_ritchie, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:05:38 compute-0 systemd[1]: Started libpod-conmon-d1a198cc62609a35069a2d266645b86ad0015e25cb8e7636403e2fea8a07a234.scope.
Nov 25 20:05:38 compute-0 podman[76257]: 2025-11-25 20:05:38.735043236 +0000 UTC m=+0.021593678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824410a773c202ea75da54475b0dde546cb006597a3b504a17b444bc8fe9e217/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824410a773c202ea75da54475b0dde546cb006597a3b504a17b444bc8fe9e217/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824410a773c202ea75da54475b0dde546cb006597a3b504a17b444bc8fe9e217/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:38 compute-0 podman[76257]: 2025-11-25 20:05:38.857295153 +0000 UTC m=+0.143845615 container init d1a198cc62609a35069a2d266645b86ad0015e25cb8e7636403e2fea8a07a234 (image=quay.io/ceph/ceph:v18, name=magical_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:05:38 compute-0 podman[76257]: 2025-11-25 20:05:38.866047001 +0000 UTC m=+0.152597373 container start d1a198cc62609a35069a2d266645b86ad0015e25cb8e7636403e2fea8a07a234 (image=quay.io/ceph/ceph:v18, name=magical_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:05:38 compute-0 podman[76257]: 2025-11-25 20:05:38.869857934 +0000 UTC m=+0.156408396 container attach d1a198cc62609a35069a2d266645b86ad0015e25cb8e7636403e2fea8a07a234 (image=quay.io/ceph/ceph:v18, name=magical_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:05:40 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'crash'
Nov 25 20:05:40 compute-0 ceph-mgr[75443]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 20:05:40 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'dashboard'
Nov 25 20:05:40 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:40.685+0000 7f93253c6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 20:05:42 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'devicehealth'
Nov 25 20:05:42 compute-0 ceph-mgr[75443]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 20:05:42 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 20:05:42 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:42.329+0000 7f93253c6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 20:05:42 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 20:05:42 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 20:05:42 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]:   from numpy import show_config as show_numpy_config
Nov 25 20:05:42 compute-0 ceph-mgr[75443]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 20:05:42 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:42.823+0000 7f93253c6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 20:05:42 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'influx'
Nov 25 20:05:43 compute-0 ceph-mgr[75443]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 20:05:43 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'insights'
Nov 25 20:05:43 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:43.043+0000 7f93253c6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 20:05:43 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'iostat'
Nov 25 20:05:43 compute-0 ceph-mgr[75443]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 20:05:43 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'k8sevents'
Nov 25 20:05:43 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:43.487+0000 7f93253c6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 20:05:45 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'localpool'
Nov 25 20:05:45 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 20:05:45 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'mirroring'
Nov 25 20:05:46 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'nfs'
Nov 25 20:05:46 compute-0 ceph-mgr[75443]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 20:05:46 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'orchestrator'
Nov 25 20:05:46 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:46.894+0000 7f93253c6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 20:05:47 compute-0 ceph-mgr[75443]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 20:05:47 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 20:05:47 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:47.543+0000 7f93253c6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 20:05:47 compute-0 ceph-mgr[75443]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 20:05:47 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'osd_support'
Nov 25 20:05:47 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:47.817+0000 7f93253c6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 20:05:48 compute-0 ceph-mgr[75443]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 20:05:48 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 20:05:48 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:48.047+0000 7f93253c6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 20:05:48 compute-0 ceph-mgr[75443]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 20:05:48 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'progress'
Nov 25 20:05:48 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:48.314+0000 7f93253c6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 20:05:48 compute-0 ceph-mgr[75443]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 20:05:48 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:48.571+0000 7f93253c6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 20:05:48 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'prometheus'
Nov 25 20:05:49 compute-0 ceph-mgr[75443]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 20:05:49 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'rbd_support'
Nov 25 20:05:49 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:49.544+0000 7f93253c6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 20:05:49 compute-0 ceph-mgr[75443]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 20:05:49 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'restful'
Nov 25 20:05:49 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:49.835+0000 7f93253c6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 20:05:50 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'rgw'
Nov 25 20:05:51 compute-0 ceph-mgr[75443]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 20:05:51 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'rook'
Nov 25 20:05:51 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:51.270+0000 7f93253c6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 20:05:53 compute-0 ceph-mgr[75443]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 20:05:53 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:53.277+0000 7f93253c6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 20:05:53 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'selftest'
Nov 25 20:05:53 compute-0 ceph-mgr[75443]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 20:05:53 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'snap_schedule'
Nov 25 20:05:53 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:53.515+0000 7f93253c6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 20:05:53 compute-0 ceph-mgr[75443]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 20:05:53 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'stats'
Nov 25 20:05:53 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:53.751+0000 7f93253c6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 20:05:53 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'status'
Nov 25 20:05:54 compute-0 ceph-mgr[75443]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 20:05:54 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'telegraf'
Nov 25 20:05:54 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:54.239+0000 7f93253c6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 20:05:54 compute-0 ceph-mgr[75443]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 20:05:54 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'telemetry'
Nov 25 20:05:54 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:54.469+0000 7f93253c6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 20:05:55 compute-0 ceph-mgr[75443]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 20:05:55 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 20:05:55 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:55.067+0000 7f93253c6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 20:05:55 compute-0 ceph-mgr[75443]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 20:05:55 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'volumes'
Nov 25 20:05:55 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:55.738+0000 7f93253c6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr[py] Loading python module 'zabbix'
Nov 25 20:05:56 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:56.437+0000 7f93253c6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 20:05:56 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:05:56.677+0000 7f93253c6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : Active manager daemon compute-0.hdjasd restarted
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hdjasd
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: ms_deliver_dispatch: unhandled message 0x557c1032a420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr handle_mgr_map Activating!
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr handle_mgr_map I am now activating
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.hdjasd(active, starting, since 0.0173844s)
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hdjasd", "id": "compute-0.hdjasd"} v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hdjasd", "id": "compute-0.hdjasd"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: balancer
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Starting
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : Manager daemon compute-0.hdjasd is now available
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:05:56
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [balancer INFO root] No pools available
Nov 25 20:05:56 compute-0 ceph-mon[75144]: Active manager daemon compute-0.hdjasd restarted
Nov 25 20:05:56 compute-0 ceph-mon[75144]: Activating manager daemon compute-0.hdjasd
Nov 25 20:05:56 compute-0 ceph-mon[75144]: osdmap e2: 0 total, 0 up, 0 in
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mgrmap e6: compute-0.hdjasd(active, starting, since 0.0173844s)
Nov 25 20:05:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hdjasd", "id": "compute-0.hdjasd"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: Manager daemon compute-0.hdjasd is now available
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: cephadm
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: crash
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: devicehealth
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [devicehealth INFO root] Starting
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: iostat
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: nfs
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: orchestrator
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: pg_autoscaler
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: progress
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [progress INFO root] Loading...
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [progress INFO root] No stored events to load
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [progress INFO root] Loaded [] historic events
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] recovery thread starting
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] starting setup
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: rbd_support
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: restful
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: status
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: telemetry
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/mirror_snapshot_schedule"} v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/mirror_snapshot_schedule"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [restful WARNING root] server not running: no certificate configured
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] PerfHandler: starting
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TaskHandler: starting
Nov 25 20:05:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/trash_purge_schedule"} v 0) v1
Nov 25 20:05:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/trash_purge_schedule"}]: dispatch
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] setup complete
Nov 25 20:05:56 compute-0 ceph-mgr[75443]: mgr load Constructed class from module: volumes
Nov 25 20:05:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 25 20:05:57 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 25 20:05:57 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:57 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.hdjasd(active, since 1.02639s)
Nov 25 20:05:57 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 25 20:05:57 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 25 20:05:57 compute-0 magical_ritchie[76274]: {
Nov 25 20:05:57 compute-0 magical_ritchie[76274]:     "mgrmap_epoch": 7,
Nov 25 20:05:57 compute-0 magical_ritchie[76274]:     "initialized": true
Nov 25 20:05:57 compute-0 magical_ritchie[76274]: }
Nov 25 20:05:57 compute-0 systemd[1]: libpod-d1a198cc62609a35069a2d266645b86ad0015e25cb8e7636403e2fea8a07a234.scope: Deactivated successfully.
Nov 25 20:05:57 compute-0 podman[76257]: 2025-11-25 20:05:57.750332322 +0000 UTC m=+19.036882684 container died d1a198cc62609a35069a2d266645b86ad0015e25cb8e7636403e2fea8a07a234 (image=quay.io/ceph/ceph:v18, name=magical_ritchie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 20:05:57 compute-0 ceph-mon[75144]: Found migration_current of "None". Setting to last migration.
Nov 25 20:05:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:05:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:05:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/mirror_snapshot_schedule"}]: dispatch
Nov 25 20:05:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hdjasd/trash_purge_schedule"}]: dispatch
Nov 25 20:05:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:57 compute-0 ceph-mon[75144]: mgrmap e7: compute-0.hdjasd(active, since 1.02639s)
Nov 25 20:05:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-824410a773c202ea75da54475b0dde546cb006597a3b504a17b444bc8fe9e217-merged.mount: Deactivated successfully.
Nov 25 20:05:57 compute-0 podman[76257]: 2025-11-25 20:05:57.802281585 +0000 UTC m=+19.088831977 container remove d1a198cc62609a35069a2d266645b86ad0015e25cb8e7636403e2fea8a07a234 (image=quay.io/ceph/ceph:v18, name=magical_ritchie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:05:57 compute-0 systemd[1]: libpod-conmon-d1a198cc62609a35069a2d266645b86ad0015e25cb8e7636403e2fea8a07a234.scope: Deactivated successfully.
Nov 25 20:05:57 compute-0 podman[76433]: 2025-11-25 20:05:57.901069453 +0000 UTC m=+0.065863393 container create 978088bec32831ce02b23ddfac22bbabe9308c3867afa6b171cb3524630cffb5 (image=quay.io/ceph/ceph:v18, name=admiring_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:05:57 compute-0 systemd[1]: Started libpod-conmon-978088bec32831ce02b23ddfac22bbabe9308c3867afa6b171cb3524630cffb5.scope.
Nov 25 20:05:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774b8d778983a50c112618ac6a6567f1d2f0e795b13b6686529a634b2f51b9dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774b8d778983a50c112618ac6a6567f1d2f0e795b13b6686529a634b2f51b9dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774b8d778983a50c112618ac6a6567f1d2f0e795b13b6686529a634b2f51b9dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:57 compute-0 podman[76433]: 2025-11-25 20:05:57.880066871 +0000 UTC m=+0.044860831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:57 compute-0 podman[76433]: 2025-11-25 20:05:57.977456461 +0000 UTC m=+0.142250481 container init 978088bec32831ce02b23ddfac22bbabe9308c3867afa6b171cb3524630cffb5 (image=quay.io/ceph/ceph:v18, name=admiring_galois, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:05:57 compute-0 podman[76433]: 2025-11-25 20:05:57.983190647 +0000 UTC m=+0.147984577 container start 978088bec32831ce02b23ddfac22bbabe9308c3867afa6b171cb3524630cffb5 (image=quay.io/ceph/ceph:v18, name=admiring_galois, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:05:57 compute-0 podman[76433]: 2025-11-25 20:05:57.988413339 +0000 UTC m=+0.153207299 container attach 978088bec32831ce02b23ddfac22bbabe9308c3867afa6b171cb3524630cffb5 (image=quay.io/ceph/ceph:v18, name=admiring_galois, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: [cephadm INFO cherrypy.error] [25/Nov/2025:20:05:58] ENGINE Bus STARTING
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : [25/Nov/2025:20:05:58] ENGINE Bus STARTING
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: [cephadm INFO cherrypy.error] [25/Nov/2025:20:05:58] ENGINE Serving on https://192.168.122.100:7150
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : [25/Nov/2025:20:05:58] ENGINE Serving on https://192.168.122.100:7150
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: [cephadm INFO cherrypy.error] [25/Nov/2025:20:05:58] ENGINE Client ('192.168.122.100', 49872) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : [25/Nov/2025:20:05:58] ENGINE Client ('192.168.122.100', 49872) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:05:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 25 20:05:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 20:05:58 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:05:58 compute-0 systemd[1]: libpod-978088bec32831ce02b23ddfac22bbabe9308c3867afa6b171cb3524630cffb5.scope: Deactivated successfully.
Nov 25 20:05:58 compute-0 podman[76433]: 2025-11-25 20:05:58.553337998 +0000 UTC m=+0.718131928 container died 978088bec32831ce02b23ddfac22bbabe9308c3867afa6b171cb3524630cffb5 (image=quay.io/ceph/ceph:v18, name=admiring_galois, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-774b8d778983a50c112618ac6a6567f1d2f0e795b13b6686529a634b2f51b9dd-merged.mount: Deactivated successfully.
Nov 25 20:05:58 compute-0 podman[76433]: 2025-11-25 20:05:58.601928431 +0000 UTC m=+0.766722361 container remove 978088bec32831ce02b23ddfac22bbabe9308c3867afa6b171cb3524630cffb5 (image=quay.io/ceph/ceph:v18, name=admiring_galois, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: [cephadm INFO cherrypy.error] [25/Nov/2025:20:05:58] ENGINE Serving on http://192.168.122.100:8765
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : [25/Nov/2025:20:05:58] ENGINE Serving on http://192.168.122.100:8765
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: [cephadm INFO cherrypy.error] [25/Nov/2025:20:05:58] ENGINE Bus STARTED
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : [25/Nov/2025:20:05:58] ENGINE Bus STARTED
Nov 25 20:05:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 20:05:58 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:05:58 compute-0 systemd[1]: libpod-conmon-978088bec32831ce02b23ddfac22bbabe9308c3867afa6b171cb3524630cffb5.scope: Deactivated successfully.
Nov 25 20:05:58 compute-0 podman[76512]: 2025-11-25 20:05:58.677354463 +0000 UTC m=+0.056197150 container create fa650d488c9881c5df7bc0df72c9d4003e417a3bcc1d761115b0d8f8c15eb849 (image=quay.io/ceph/ceph:v18, name=festive_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:05:58 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:05:58 compute-0 systemd[1]: Started libpod-conmon-fa650d488c9881c5df7bc0df72c9d4003e417a3bcc1d761115b0d8f8c15eb849.scope.
Nov 25 20:05:58 compute-0 podman[76512]: 2025-11-25 20:05:58.649560617 +0000 UTC m=+0.028403354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd49d9dbaa707ffeecdb86c925cb0d3f0f5126a77cf41b8b867adf4af8e5bf7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd49d9dbaa707ffeecdb86c925cb0d3f0f5126a77cf41b8b867adf4af8e5bf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd49d9dbaa707ffeecdb86c925cb0d3f0f5126a77cf41b8b867adf4af8e5bf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:58 compute-0 ceph-mon[75144]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 25 20:05:58 compute-0 ceph-mon[75144]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 25 20:05:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:05:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:05:58 compute-0 podman[76512]: 2025-11-25 20:05:58.780283673 +0000 UTC m=+0.159126410 container init fa650d488c9881c5df7bc0df72c9d4003e417a3bcc1d761115b0d8f8c15eb849 (image=quay.io/ceph/ceph:v18, name=festive_napier, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:05:58 compute-0 podman[76512]: 2025-11-25 20:05:58.786656887 +0000 UTC m=+0.165499544 container start fa650d488c9881c5df7bc0df72c9d4003e417a3bcc1d761115b0d8f8c15eb849 (image=quay.io/ceph/ceph:v18, name=festive_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 20:05:58 compute-0 podman[76512]: 2025-11-25 20:05:58.790791539 +0000 UTC m=+0.169634286 container attach fa650d488c9881c5df7bc0df72c9d4003e417a3bcc1d761115b0d8f8c15eb849 (image=quay.io/ceph/ceph:v18, name=festive_napier, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:05:59 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:05:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 25 20:05:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:59 compute-0 ceph-mgr[75443]: [cephadm INFO root] Set ssh ssh_user
Nov 25 20:05:59 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 25 20:05:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 25 20:05:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:59 compute-0 ceph-mgr[75443]: [cephadm INFO root] Set ssh ssh_config
Nov 25 20:05:59 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 25 20:05:59 compute-0 ceph-mgr[75443]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 25 20:05:59 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 25 20:05:59 compute-0 festive_napier[76528]: ssh user set to ceph-admin. sudo will be used
Nov 25 20:05:59 compute-0 systemd[1]: libpod-fa650d488c9881c5df7bc0df72c9d4003e417a3bcc1d761115b0d8f8c15eb849.scope: Deactivated successfully.
Nov 25 20:05:59 compute-0 podman[76512]: 2025-11-25 20:05:59.36533367 +0000 UTC m=+0.744176357 container died fa650d488c9881c5df7bc0df72c9d4003e417a3bcc1d761115b0d8f8c15eb849 (image=quay.io/ceph/ceph:v18, name=festive_napier, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcd49d9dbaa707ffeecdb86c925cb0d3f0f5126a77cf41b8b867adf4af8e5bf7-merged.mount: Deactivated successfully.
Nov 25 20:05:59 compute-0 podman[76512]: 2025-11-25 20:05:59.438332366 +0000 UTC m=+0.817175033 container remove fa650d488c9881c5df7bc0df72c9d4003e417a3bcc1d761115b0d8f8c15eb849 (image=quay.io/ceph/ceph:v18, name=festive_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:05:59 compute-0 systemd[1]: libpod-conmon-fa650d488c9881c5df7bc0df72c9d4003e417a3bcc1d761115b0d8f8c15eb849.scope: Deactivated successfully.
Nov 25 20:05:59 compute-0 podman[76570]: 2025-11-25 20:05:59.508455674 +0000 UTC m=+0.043425632 container create 268ab89da1690255def2955ab164be11ad3a10c782f1aa51782c1f6d3ff3c6b1 (image=quay.io/ceph/ceph:v18, name=magical_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:05:59 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.hdjasd(active, since 2s)
Nov 25 20:05:59 compute-0 systemd[1]: Started libpod-conmon-268ab89da1690255def2955ab164be11ad3a10c782f1aa51782c1f6d3ff3c6b1.scope.
Nov 25 20:05:59 compute-0 podman[76570]: 2025-11-25 20:05:59.490828425 +0000 UTC m=+0.025798413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:05:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad330a2e4d752dda2847b7f29af935857a0966392352634e121928dd7c03cbc/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad330a2e4d752dda2847b7f29af935857a0966392352634e121928dd7c03cbc/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad330a2e4d752dda2847b7f29af935857a0966392352634e121928dd7c03cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad330a2e4d752dda2847b7f29af935857a0966392352634e121928dd7c03cbc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad330a2e4d752dda2847b7f29af935857a0966392352634e121928dd7c03cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:05:59 compute-0 podman[76570]: 2025-11-25 20:05:59.613001049 +0000 UTC m=+0.147971077 container init 268ab89da1690255def2955ab164be11ad3a10c782f1aa51782c1f6d3ff3c6b1 (image=quay.io/ceph/ceph:v18, name=magical_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 20:05:59 compute-0 podman[76570]: 2025-11-25 20:05:59.623972978 +0000 UTC m=+0.158942936 container start 268ab89da1690255def2955ab164be11ad3a10c782f1aa51782c1f6d3ff3c6b1 (image=quay.io/ceph/ceph:v18, name=magical_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:05:59 compute-0 podman[76570]: 2025-11-25 20:05:59.627438232 +0000 UTC m=+0.162408220 container attach 268ab89da1690255def2955ab164be11ad3a10c782f1aa51782c1f6d3ff3c6b1 (image=quay.io/ceph/ceph:v18, name=magical_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:05:59 compute-0 ceph-mon[75144]: [25/Nov/2025:20:05:58] ENGINE Bus STARTING
Nov 25 20:05:59 compute-0 ceph-mon[75144]: [25/Nov/2025:20:05:58] ENGINE Serving on https://192.168.122.100:7150
Nov 25 20:05:59 compute-0 ceph-mon[75144]: [25/Nov/2025:20:05:58] ENGINE Client ('192.168.122.100', 49872) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 20:05:59 compute-0 ceph-mon[75144]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:05:59 compute-0 ceph-mon[75144]: [25/Nov/2025:20:05:58] ENGINE Serving on http://192.168.122.100:8765
Nov 25 20:05:59 compute-0 ceph-mon[75144]: [25/Nov/2025:20:05:58] ENGINE Bus STARTED
Nov 25 20:05:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:05:59 compute-0 ceph-mon[75144]: mgrmap e8: compute-0.hdjasd(active, since 2s)
Nov 25 20:06:00 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 25 20:06:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:00 compute-0 ceph-mgr[75443]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 25 20:06:00 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 25 20:06:00 compute-0 ceph-mgr[75443]: [cephadm INFO root] Set ssh private key
Nov 25 20:06:00 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 25 20:06:00 compute-0 systemd[1]: libpod-268ab89da1690255def2955ab164be11ad3a10c782f1aa51782c1f6d3ff3c6b1.scope: Deactivated successfully.
Nov 25 20:06:00 compute-0 podman[76570]: 2025-11-25 20:06:00.175093471 +0000 UTC m=+0.710063459 container died 268ab89da1690255def2955ab164be11ad3a10c782f1aa51782c1f6d3ff3c6b1 (image=quay.io/ceph/ceph:v18, name=magical_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cad330a2e4d752dda2847b7f29af935857a0966392352634e121928dd7c03cbc-merged.mount: Deactivated successfully.
Nov 25 20:06:00 compute-0 podman[76570]: 2025-11-25 20:06:00.229100691 +0000 UTC m=+0.764070669 container remove 268ab89da1690255def2955ab164be11ad3a10c782f1aa51782c1f6d3ff3c6b1 (image=quay.io/ceph/ceph:v18, name=magical_neumann, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:00 compute-0 systemd[1]: libpod-conmon-268ab89da1690255def2955ab164be11ad3a10c782f1aa51782c1f6d3ff3c6b1.scope: Deactivated successfully.
Nov 25 20:06:00 compute-0 podman[76623]: 2025-11-25 20:06:00.312307064 +0000 UTC m=+0.062746437 container create 85c2baa9d35961430aff9a473865021ba7794da1d6a30433c72bb03a0c5a2eb6 (image=quay.io/ceph/ceph:v18, name=bold_tu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:00 compute-0 systemd[1]: Started libpod-conmon-85c2baa9d35961430aff9a473865021ba7794da1d6a30433c72bb03a0c5a2eb6.scope.
Nov 25 20:06:00 compute-0 podman[76623]: 2025-11-25 20:06:00.27722397 +0000 UTC m=+0.027663393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b596bfd453dbf7e069c88b9e5e94c9b3f15700b59df690936235e2715b1259a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b596bfd453dbf7e069c88b9e5e94c9b3f15700b59df690936235e2715b1259a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b596bfd453dbf7e069c88b9e5e94c9b3f15700b59df690936235e2715b1259a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b596bfd453dbf7e069c88b9e5e94c9b3f15700b59df690936235e2715b1259a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b596bfd453dbf7e069c88b9e5e94c9b3f15700b59df690936235e2715b1259a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:00 compute-0 podman[76623]: 2025-11-25 20:06:00.39374718 +0000 UTC m=+0.144186543 container init 85c2baa9d35961430aff9a473865021ba7794da1d6a30433c72bb03a0c5a2eb6 (image=quay.io/ceph/ceph:v18, name=bold_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:06:00 compute-0 podman[76623]: 2025-11-25 20:06:00.404290597 +0000 UTC m=+0.154729920 container start 85c2baa9d35961430aff9a473865021ba7794da1d6a30433c72bb03a0c5a2eb6 (image=quay.io/ceph/ceph:v18, name=bold_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:00 compute-0 podman[76623]: 2025-11-25 20:06:00.407011621 +0000 UTC m=+0.157451004 container attach 85c2baa9d35961430aff9a473865021ba7794da1d6a30433c72bb03a0c5a2eb6 (image=quay.io/ceph/ceph:v18, name=bold_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:00 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:06:00 compute-0 ceph-mon[75144]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:00 compute-0 ceph-mon[75144]: Set ssh ssh_user
Nov 25 20:06:00 compute-0 ceph-mon[75144]: Set ssh ssh_config
Nov 25 20:06:00 compute-0 ceph-mon[75144]: ssh user set to ceph-admin. sudo will be used
Nov 25 20:06:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:00 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 25 20:06:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:00 compute-0 ceph-mgr[75443]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 25 20:06:00 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 25 20:06:00 compute-0 systemd[1]: libpod-85c2baa9d35961430aff9a473865021ba7794da1d6a30433c72bb03a0c5a2eb6.scope: Deactivated successfully.
Nov 25 20:06:00 compute-0 podman[76623]: 2025-11-25 20:06:00.964336494 +0000 UTC m=+0.714775867 container died 85c2baa9d35961430aff9a473865021ba7794da1d6a30433c72bb03a0c5a2eb6 (image=quay.io/ceph/ceph:v18, name=bold_tu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b596bfd453dbf7e069c88b9e5e94c9b3f15700b59df690936235e2715b1259a-merged.mount: Deactivated successfully.
Nov 25 20:06:01 compute-0 podman[76623]: 2025-11-25 20:06:01.012122944 +0000 UTC m=+0.762562277 container remove 85c2baa9d35961430aff9a473865021ba7794da1d6a30433c72bb03a0c5a2eb6 (image=quay.io/ceph/ceph:v18, name=bold_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:01 compute-0 systemd[1]: libpod-conmon-85c2baa9d35961430aff9a473865021ba7794da1d6a30433c72bb03a0c5a2eb6.scope: Deactivated successfully.
Nov 25 20:06:01 compute-0 podman[76679]: 2025-11-25 20:06:01.089164871 +0000 UTC m=+0.055242755 container create e4ff1b089915de11df2eef648bf3deccbadac0b2e2e3ad2d39ce8ffa0daef786 (image=quay.io/ceph/ceph:v18, name=affectionate_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019919522 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:01 compute-0 systemd[1]: Started libpod-conmon-e4ff1b089915de11df2eef648bf3deccbadac0b2e2e3ad2d39ce8ffa0daef786.scope.
Nov 25 20:06:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:01 compute-0 podman[76679]: 2025-11-25 20:06:01.062261008 +0000 UTC m=+0.028338982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4251827c2da63e825d8ce0aeb1d9b6e5194234f764605b5f45666af0ee693f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4251827c2da63e825d8ce0aeb1d9b6e5194234f764605b5f45666af0ee693f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4251827c2da63e825d8ce0aeb1d9b6e5194234f764605b5f45666af0ee693f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:01 compute-0 podman[76679]: 2025-11-25 20:06:01.177614337 +0000 UTC m=+0.143692311 container init e4ff1b089915de11df2eef648bf3deccbadac0b2e2e3ad2d39ce8ffa0daef786 (image=quay.io/ceph/ceph:v18, name=affectionate_saha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:01 compute-0 podman[76679]: 2025-11-25 20:06:01.18360636 +0000 UTC m=+0.149684294 container start e4ff1b089915de11df2eef648bf3deccbadac0b2e2e3ad2d39ce8ffa0daef786 (image=quay.io/ceph/ceph:v18, name=affectionate_saha, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 20:06:01 compute-0 podman[76679]: 2025-11-25 20:06:01.187356602 +0000 UTC m=+0.153434526 container attach e4ff1b089915de11df2eef648bf3deccbadac0b2e2e3ad2d39ce8ffa0daef786 (image=quay.io/ceph/ceph:v18, name=affectionate_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:01 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:01 compute-0 affectionate_saha[76695]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcQnvZhXt7Cce4rfCVpGOIHCjWqhQ3rElsgnAnQAqYg2mWgZj7AD0A+/RyWv77ojQx9FKAL5uRNvFgNAnSgnR5SNvXblpN6hrG519t80lIP1JB8pUnl73UwmJyEbg992jtRhMnhJX/soUEDZBeaDUtdKV5CY+lOxGbsBUt+YfgonIk1ng5mQZ9PJ+SUvDgjAtVf4mhIbW+vzMwnkqQmQE+t0T2HNqkKdxwdAy7gqCi04+RB3p6gYMJqJI+Ydv6JbSZFentNMRdC9BdPHU5rR5wK5k42T/Q4uA0HsUBv9l6tVqSvLsd/xeWCjZXCjW47s1vhk/+GKXPtEV9IHcXz3uN6EfGWQburBFQDCqY5jfC3BSbdO7fCJPYf1lsBDOZPyvoz10KKxvqtmFMz7XFC1YdkjJ+Zr++0TRtNGygCzORycFZNp/ywf3+VdVRyYJqER/MAV5IGiYxRQDq+gb36pMDC8B7VqeG01WzknLoO30CC19kwXE4g9x9FwSA0Rl+5l8= zuul@controller
Nov 25 20:06:01 compute-0 systemd[1]: libpod-e4ff1b089915de11df2eef648bf3deccbadac0b2e2e3ad2d39ce8ffa0daef786.scope: Deactivated successfully.
Nov 25 20:06:01 compute-0 podman[76679]: 2025-11-25 20:06:01.715961243 +0000 UTC m=+0.682039127 container died e4ff1b089915de11df2eef648bf3deccbadac0b2e2e3ad2d39ce8ffa0daef786 (image=quay.io/ceph/ceph:v18, name=affectionate_saha, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:06:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4251827c2da63e825d8ce0aeb1d9b6e5194234f764605b5f45666af0ee693f1-merged.mount: Deactivated successfully.
Nov 25 20:06:01 compute-0 podman[76679]: 2025-11-25 20:06:01.760451804 +0000 UTC m=+0.726529688 container remove e4ff1b089915de11df2eef648bf3deccbadac0b2e2e3ad2d39ce8ffa0daef786 (image=quay.io/ceph/ceph:v18, name=affectionate_saha, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 20:06:01 compute-0 systemd[1]: libpod-conmon-e4ff1b089915de11df2eef648bf3deccbadac0b2e2e3ad2d39ce8ffa0daef786.scope: Deactivated successfully.
Nov 25 20:06:01 compute-0 ceph-mon[75144]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:01 compute-0 ceph-mon[75144]: Set ssh ssh_identity_key
Nov 25 20:06:01 compute-0 ceph-mon[75144]: Set ssh private key
Nov 25 20:06:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:01 compute-0 podman[76731]: 2025-11-25 20:06:01.829852362 +0000 UTC m=+0.045300233 container create a7e0a93e0c83a0d00cdc676dfaed08c50c2f250069ee8a440e85f45fbe9a9f98 (image=quay.io/ceph/ceph:v18, name=goofy_lovelace, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 20:06:01 compute-0 systemd[1]: Started libpod-conmon-a7e0a93e0c83a0d00cdc676dfaed08c50c2f250069ee8a440e85f45fbe9a9f98.scope.
Nov 25 20:06:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dd4f6910e20519d1da4f20fbfb195fedef04929e475d690e95739585791f5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dd4f6910e20519d1da4f20fbfb195fedef04929e475d690e95739585791f5d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dd4f6910e20519d1da4f20fbfb195fedef04929e475d690e95739585791f5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:01 compute-0 podman[76731]: 2025-11-25 20:06:01.810789133 +0000 UTC m=+0.026237034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:01 compute-0 podman[76731]: 2025-11-25 20:06:01.912593104 +0000 UTC m=+0.128040995 container init a7e0a93e0c83a0d00cdc676dfaed08c50c2f250069ee8a440e85f45fbe9a9f98 (image=quay.io/ceph/ceph:v18, name=goofy_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:06:01 compute-0 podman[76731]: 2025-11-25 20:06:01.922979026 +0000 UTC m=+0.138426927 container start a7e0a93e0c83a0d00cdc676dfaed08c50c2f250069ee8a440e85f45fbe9a9f98 (image=quay.io/ceph/ceph:v18, name=goofy_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:06:01 compute-0 podman[76731]: 2025-11-25 20:06:01.926728048 +0000 UTC m=+0.142175919 container attach a7e0a93e0c83a0d00cdc676dfaed08c50c2f250069ee8a440e85f45fbe9a9f98 (image=quay.io/ceph/ceph:v18, name=goofy_lovelace, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:02 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:02 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:06:02 compute-0 sshd-session[76773]: Accepted publickey for ceph-admin from 192.168.122.100 port 33852 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:02 compute-0 systemd-logind[789]: New session 21 of user ceph-admin.
Nov 25 20:06:02 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 20:06:02 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 20:06:02 compute-0 ceph-mon[75144]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:02 compute-0 ceph-mon[75144]: Set ssh ssh_identity_pub
Nov 25 20:06:02 compute-0 ceph-mon[75144]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 20:06:02 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 25 20:06:02 compute-0 systemd[76777]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:02 compute-0 sshd-session[76783]: Accepted publickey for ceph-admin from 192.168.122.100 port 33854 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:02 compute-0 systemd-logind[789]: New session 23 of user ceph-admin.
Nov 25 20:06:02 compute-0 systemd[76777]: Queued start job for default target Main User Target.
Nov 25 20:06:03 compute-0 systemd[76777]: Created slice User Application Slice.
Nov 25 20:06:03 compute-0 systemd[76777]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 20:06:03 compute-0 systemd[76777]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 20:06:03 compute-0 systemd[76777]: Reached target Paths.
Nov 25 20:06:03 compute-0 systemd[76777]: Reached target Timers.
Nov 25 20:06:03 compute-0 systemd[76777]: Starting D-Bus User Message Bus Socket...
Nov 25 20:06:03 compute-0 systemd[76777]: Starting Create User's Volatile Files and Directories...
Nov 25 20:06:03 compute-0 systemd[76777]: Listening on D-Bus User Message Bus Socket.
Nov 25 20:06:03 compute-0 systemd[76777]: Reached target Sockets.
Nov 25 20:06:03 compute-0 systemd[76777]: Finished Create User's Volatile Files and Directories.
Nov 25 20:06:03 compute-0 systemd[76777]: Reached target Basic System.
Nov 25 20:06:03 compute-0 systemd[76777]: Reached target Main User Target.
Nov 25 20:06:03 compute-0 systemd[76777]: Startup finished in 182ms.
Nov 25 20:06:03 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 25 20:06:03 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Nov 25 20:06:03 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 25 20:06:03 compute-0 sshd-session[76773]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:03 compute-0 sshd-session[76783]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:03 compute-0 sudo[76798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:03 compute-0 sudo[76798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:03 compute-0 sudo[76798]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:03 compute-0 sudo[76823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:03 compute-0 sudo[76823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:03 compute-0 sudo[76823]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:03 compute-0 sshd-session[76848]: Accepted publickey for ceph-admin from 192.168.122.100 port 33856 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:03 compute-0 systemd-logind[789]: New session 24 of user ceph-admin.
Nov 25 20:06:03 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 25 20:06:03 compute-0 sshd-session[76848]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:03 compute-0 sudo[76852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:03 compute-0 sudo[76852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:03 compute-0 sudo[76852]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:03 compute-0 sudo[76877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 25 20:06:03 compute-0 sudo[76877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:03 compute-0 sudo[76877]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:03 compute-0 ceph-mon[75144]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:04 compute-0 sshd-session[76902]: Accepted publickey for ceph-admin from 192.168.122.100 port 33862 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:04 compute-0 systemd-logind[789]: New session 25 of user ceph-admin.
Nov 25 20:06:04 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 25 20:06:04 compute-0 sshd-session[76902]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:04 compute-0 sudo[76906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:04 compute-0 sudo[76906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:04 compute-0 sudo[76906]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:04 compute-0 sudo[76931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 25 20:06:04 compute-0 sudo[76931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:04 compute-0 sudo[76931]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:04 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 25 20:06:04 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 25 20:06:04 compute-0 sshd-session[76956]: Accepted publickey for ceph-admin from 192.168.122.100 port 33876 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:04 compute-0 systemd-logind[789]: New session 26 of user ceph-admin.
Nov 25 20:06:04 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 25 20:06:04 compute-0 sshd-session[76956]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:04 compute-0 sudo[76960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:04 compute-0 sudo[76960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:04 compute-0 sudo[76960]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:04 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:06:04 compute-0 sudo[76985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:04 compute-0 sudo[76985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:04 compute-0 sudo[76985]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:04 compute-0 ceph-mon[75144]: Deploying cephadm binary to compute-0
Nov 25 20:06:04 compute-0 sshd-session[77010]: Accepted publickey for ceph-admin from 192.168.122.100 port 33888 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:04 compute-0 systemd-logind[789]: New session 27 of user ceph-admin.
Nov 25 20:06:05 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 25 20:06:05 compute-0 sshd-session[77010]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:05 compute-0 sudo[77014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:05 compute-0 sudo[77014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:05 compute-0 sudo[77014]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:05 compute-0 sudo[77039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:05 compute-0 sudo[77039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:05 compute-0 sudo[77039]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:05 compute-0 sshd-session[77064]: Accepted publickey for ceph-admin from 192.168.122.100 port 33900 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:05 compute-0 systemd-logind[789]: New session 28 of user ceph-admin.
Nov 25 20:06:05 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 25 20:06:05 compute-0 sshd-session[77064]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:05 compute-0 sudo[77068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:05 compute-0 sudo[77068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:05 compute-0 sudo[77068]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:05 compute-0 sudo[77093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 25 20:06:05 compute-0 sudo[77093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:05 compute-0 sudo[77093]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:05 compute-0 sshd-session[77118]: Accepted publickey for ceph-admin from 192.168.122.100 port 33904 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:05 compute-0 systemd-logind[789]: New session 29 of user ceph-admin.
Nov 25 20:06:05 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 25 20:06:05 compute-0 sshd-session[77118]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052972 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:06 compute-0 sudo[77122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:06 compute-0 sudo[77122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:06 compute-0 sudo[77122]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:06 compute-0 sudo[77147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:06 compute-0 sudo[77147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:06 compute-0 sudo[77147]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:06 compute-0 sshd-session[77172]: Accepted publickey for ceph-admin from 192.168.122.100 port 33916 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:06 compute-0 systemd-logind[789]: New session 30 of user ceph-admin.
Nov 25 20:06:06 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 25 20:06:06 compute-0 sshd-session[77172]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:06 compute-0 sudo[77176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:06 compute-0 sudo[77176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:06 compute-0 sudo[77176]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:06 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:06:06 compute-0 sudo[77201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 25 20:06:06 compute-0 sudo[77201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:06 compute-0 sudo[77201]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:06 compute-0 sshd-session[77226]: Accepted publickey for ceph-admin from 192.168.122.100 port 33924 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:06 compute-0 systemd-logind[789]: New session 31 of user ceph-admin.
Nov 25 20:06:06 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 25 20:06:06 compute-0 sshd-session[77226]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:07 compute-0 sshd-session[77253]: Accepted publickey for ceph-admin from 192.168.122.100 port 57364 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:07 compute-0 systemd-logind[789]: New session 32 of user ceph-admin.
Nov 25 20:06:07 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 25 20:06:07 compute-0 sshd-session[77253]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:07 compute-0 sudo[77257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:07 compute-0 sudo[77257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:07 compute-0 sudo[77257]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:07 compute-0 sudo[77282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 25 20:06:07 compute-0 sudo[77282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:07 compute-0 sudo[77282]: pam_unix(sudo:session): session closed for user root
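The sudo sequence above (mkdir a staging tree under /tmp/cephadm-<fsid>, touch a .new file, chown, chmod 644, then mv the .new file to its final path under /var/lib/ceph/<fsid>/) is a staged write: the cephadm script only becomes visible at its final path once it is fully written. A minimal Python sketch of that copy-then-rename pattern follows; the function name and the choice to stage in the destination directory are illustrative assumptions, not cephadm's actual code.

    import os
    import tempfile

    def stage_and_install(payload: bytes, dest: str) -> None:
        # Write to a temporary ".new" file first, then rename into place.
        # os.replace() is atomic when source and destination are on the
        # same filesystem, so readers never observe a half-written file.
        dest_dir = os.path.dirname(dest)
        os.makedirs(dest_dir, exist_ok=True)
        with tempfile.NamedTemporaryFile(dir=dest_dir, suffix=".new",
                                         delete=False) as tmp:
            tmp.write(payload)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.chmod(tmp.name, 0o644)   # mirrors the "chmod 644 ... .new" line
        os.replace(tmp.name, dest)  # mirrors the final "mv ... .new <dest>"

One difference from the log: here the temp file is created in the destination directory so the final rename stays on one filesystem and remains atomic, whereas the log stages under /tmp, where the mv may cross filesystems and degrade to copy-plus-unlink.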
Nov 25 20:06:08 compute-0 sshd-session[77307]: Accepted publickey for ceph-admin from 192.168.122.100 port 57368 ssh2: RSA SHA256:vGvCQbXDDM4PPW8PtyhVUIv2zhoE/Fb/6WySXdbXADQ
Nov 25 20:06:08 compute-0 systemd-logind[789]: New session 33 of user ceph-admin.
Nov 25 20:06:08 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Nov 25 20:06:08 compute-0 sshd-session[77307]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 20:06:08 compute-0 sudo[77311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:08 compute-0 sudo[77311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:08 compute-0 sudo[77311]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:08 compute-0 sudo[77336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 25 20:06:08 compute-0 sudo[77336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:08 compute-0 sudo[77336]: pam_unix(sudo:session): session closed for user root
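The command just closed ran the deployed script as "cephadm --timeout 895 check-host --expect-hostname compute-0": a pre-flight check that the host identifies itself as the name the orchestrator is about to register (the "Added host compute-0" lines follow). A hedged sketch of such a hostname check is below; real cephadm's check-host also probes for the container engine, chrony, and other prerequisites, which this sketch omits.

    import socket
    import sys

    def check_host(expect_hostname: str) -> int:
        # Compare the short local hostname against what the caller expects.
        actual = socket.gethostname().split(".")[0]
        if actual != expect_hostname:
            print(f"hostname mismatch: expected {expect_hostname!r}, "
                  f"got {actual!r}", file=sys.stderr)
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(check_host("compute-0"))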
Nov 25 20:06:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 20:06:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:08 compute-0 ceph-mgr[75443]: [cephadm INFO root] Added host compute-0
Nov 25 20:06:08 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 20:06:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 20:06:08 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:06:08 compute-0 goofy_lovelace[76747]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 20:06:08 compute-0 systemd[1]: libpod-a7e0a93e0c83a0d00cdc676dfaed08c50c2f250069ee8a440e85f45fbe9a9f98.scope: Deactivated successfully.
Nov 25 20:06:08 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:06:08 compute-0 sudo[77382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:08 compute-0 podman[77389]: 2025-11-25 20:06:08.726391215 +0000 UTC m=+0.048419218 container died a7e0a93e0c83a0d00cdc676dfaed08c50c2f250069ee8a440e85f45fbe9a9f98 (image=quay.io/ceph/ceph:v18, name=goofy_lovelace, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:08 compute-0 sudo[77382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:08 compute-0 sudo[77382]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-45dd4f6910e20519d1da4f20fbfb195fedef04929e475d690e95739585791f5d-merged.mount: Deactivated successfully.
Nov 25 20:06:08 compute-0 podman[77389]: 2025-11-25 20:06:08.79090216 +0000 UTC m=+0.112930113 container remove a7e0a93e0c83a0d00cdc676dfaed08c50c2f250069ee8a440e85f45fbe9a9f98 (image=quay.io/ceph/ceph:v18, name=goofy_lovelace, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 20:06:08 compute-0 systemd[1]: libpod-conmon-a7e0a93e0c83a0d00cdc676dfaed08c50c2f250069ee8a440e85f45fbe9a9f98.scope: Deactivated successfully.
Nov 25 20:06:08 compute-0 sudo[77422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:08 compute-0 sudo[77422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:08 compute-0 sudo[77422]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:08 compute-0 podman[77445]: 2025-11-25 20:06:08.885213206 +0000 UTC m=+0.055574693 container create f017eb9702872547353c9dedb55b5daa897855f68d9ca564e3012e6275c71044 (image=quay.io/ceph/ceph:v18, name=elated_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 20:06:08 compute-0 systemd[1]: Started libpod-conmon-f017eb9702872547353c9dedb55b5daa897855f68d9ca564e3012e6275c71044.scope.
Nov 25 20:06:08 compute-0 sudo[77456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:08 compute-0 sudo[77456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:08 compute-0 sudo[77456]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d106b91f034380be7f6b5becf9d2d508e948930e754b6882029742a87dac8e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d106b91f034380be7f6b5becf9d2d508e948930e754b6882029742a87dac8e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d106b91f034380be7f6b5becf9d2d508e948930e754b6882029742a87dac8e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:08 compute-0 podman[77445]: 2025-11-25 20:06:08.865549602 +0000 UTC m=+0.035911189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:08 compute-0 podman[77445]: 2025-11-25 20:06:08.980238602 +0000 UTC m=+0.150600119 container init f017eb9702872547353c9dedb55b5daa897855f68d9ca564e3012e6275c71044 (image=quay.io/ceph/ceph:v18, name=elated_payne, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:08 compute-0 podman[77445]: 2025-11-25 20:06:08.989125304 +0000 UTC m=+0.159486791 container start f017eb9702872547353c9dedb55b5daa897855f68d9ca564e3012e6275c71044 (image=quay.io/ceph/ceph:v18, name=elated_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:08 compute-0 podman[77445]: 2025-11-25 20:06:08.993023339 +0000 UTC m=+0.163384866 container attach f017eb9702872547353c9dedb55b5daa897855f68d9ca564e3012e6275c71044 (image=quay.io/ceph/ceph:v18, name=elated_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:09 compute-0 sudo[77491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 25 20:06:09 compute-0 sudo[77491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:09 compute-0 podman[77544]: 2025-11-25 20:06:09.290998216 +0000 UTC m=+0.043588797 container create e55fa92f5694a6067030e5bbd16752ccc738cfa25bdb3956e243ee5b8d1659b4 (image=quay.io/ceph/ceph:v18, name=suspicious_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:09 compute-0 systemd[1]: Started libpod-conmon-e55fa92f5694a6067030e5bbd16752ccc738cfa25bdb3956e243ee5b8d1659b4.scope.
Nov 25 20:06:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:09 compute-0 podman[77544]: 2025-11-25 20:06:09.269666116 +0000 UTC m=+0.022256727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:09 compute-0 podman[77544]: 2025-11-25 20:06:09.370496359 +0000 UTC m=+0.123086970 container init e55fa92f5694a6067030e5bbd16752ccc738cfa25bdb3956e243ee5b8d1659b4 (image=quay.io/ceph/ceph:v18, name=suspicious_shamir, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:09 compute-0 podman[77544]: 2025-11-25 20:06:09.376601575 +0000 UTC m=+0.129192156 container start e55fa92f5694a6067030e5bbd16752ccc738cfa25bdb3956e243ee5b8d1659b4 (image=quay.io/ceph/ceph:v18, name=suspicious_shamir, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:06:09 compute-0 podman[77544]: 2025-11-25 20:06:09.380038668 +0000 UTC m=+0.132629259 container attach e55fa92f5694a6067030e5bbd16752ccc738cfa25bdb3956e243ee5b8d1659b4 (image=quay.io/ceph/ceph:v18, name=suspicious_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:09 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:09 compute-0 ceph-mgr[75443]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 25 20:06:09 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 25 20:06:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 20:06:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:09 compute-0 elated_payne[77486]: Scheduled mon update...
Nov 25 20:06:09 compute-0 systemd[1]: libpod-f017eb9702872547353c9dedb55b5daa897855f68d9ca564e3012e6275c71044.scope: Deactivated successfully.
Nov 25 20:06:09 compute-0 podman[77445]: 2025-11-25 20:06:09.554346031 +0000 UTC m=+0.724707598 container died f017eb9702872547353c9dedb55b5daa897855f68d9ca564e3012e6275c71044 (image=quay.io/ceph/ceph:v18, name=elated_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d106b91f034380be7f6b5becf9d2d508e948930e754b6882029742a87dac8e9-merged.mount: Deactivated successfully.
Nov 25 20:06:09 compute-0 podman[77445]: 2025-11-25 20:06:09.609701047 +0000 UTC m=+0.780062534 container remove f017eb9702872547353c9dedb55b5daa897855f68d9ca564e3012e6275c71044 (image=quay.io/ceph/ceph:v18, name=elated_payne, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:09 compute-0 systemd[1]: libpod-conmon-f017eb9702872547353c9dedb55b5daa897855f68d9ca564e3012e6275c71044.scope: Deactivated successfully.
Nov 25 20:06:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:09 compute-0 ceph-mon[75144]: Added host compute-0
Nov 25 20:06:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:06:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:09 compute-0 suspicious_shamir[77579]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 20:06:09 compute-0 systemd[1]: libpod-e55fa92f5694a6067030e5bbd16752ccc738cfa25bdb3956e243ee5b8d1659b4.scope: Deactivated successfully.
Nov 25 20:06:09 compute-0 podman[77598]: 2025-11-25 20:06:09.689023876 +0000 UTC m=+0.049930560 container create 97342d02235379be1a017ebfadfe865bec338d1596ee99fbe64dff9bee5a4d39 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:06:09 compute-0 podman[77544]: 2025-11-25 20:06:09.693884507 +0000 UTC m=+0.446475138 container died e55fa92f5694a6067030e5bbd16752ccc738cfa25bdb3956e243ee5b8d1659b4 (image=quay.io/ceph/ceph:v18, name=suspicious_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:09 compute-0 systemd[1]: Started libpod-conmon-97342d02235379be1a017ebfadfe865bec338d1596ee99fbe64dff9bee5a4d39.scope.
Nov 25 20:06:09 compute-0 podman[77544]: 2025-11-25 20:06:09.747256109 +0000 UTC m=+0.499846690 container remove e55fa92f5694a6067030e5bbd16752ccc738cfa25bdb3956e243ee5b8d1659b4 (image=quay.io/ceph/ceph:v18, name=suspicious_shamir, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:09 compute-0 systemd[1]: libpod-conmon-e55fa92f5694a6067030e5bbd16752ccc738cfa25bdb3956e243ee5b8d1659b4.scope: Deactivated successfully.
Nov 25 20:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054212cd70326ef95d4991511c1d025b99d56e5c50ee3def9e6667c19153e7b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054212cd70326ef95d4991511c1d025b99d56e5c50ee3def9e6667c19153e7b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054212cd70326ef95d4991511c1d025b99d56e5c50ee3def9e6667c19153e7b9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c36dbe7e6797772f75d86f31dc302f3671f3753cab838c53cc5f5d3ded114034-merged.mount: Deactivated successfully.
Nov 25 20:06:09 compute-0 podman[77598]: 2025-11-25 20:06:09.668522597 +0000 UTC m=+0.029429301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:09 compute-0 podman[77598]: 2025-11-25 20:06:09.77116056 +0000 UTC m=+0.132067254 container init 97342d02235379be1a017ebfadfe865bec338d1596ee99fbe64dff9bee5a4d39 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:09 compute-0 sudo[77491]: pam_unix(sudo:session): session closed for user root
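The inspect-image run that just finished corresponds to the short-lived container suspicious_shamir: podman creates, starts, attaches, and removes a container whose only job is to print "ceph version 18.2.7 (...) reef (stable)", after which the mon stores the image name under container_image. A sketch of that probe via subprocess follows; the exact podman flags cephadm uses are not in the log, so this invocation is an assumption.

    import subprocess

    def ceph_version_of(image: str) -> str:
        # Run a throwaway container that prints its Ceph version; --rm
        # removes it on exit, matching the create/start/died/remove
        # sequence visible in the podman lines above.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "ceph",
             image, "--version"],
            check=True, capture_output=True, text=True,
        ).stdout
        return out.strip()

    # print(ceph_version_of("quay.io/ceph/ceph:v18"))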
Nov 25 20:06:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 25 20:06:09 compute-0 podman[77598]: 2025-11-25 20:06:09.783141216 +0000 UTC m=+0.144047890 container start 97342d02235379be1a017ebfadfe865bec338d1596ee99fbe64dff9bee5a4d39 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:06:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:09 compute-0 podman[77598]: 2025-11-25 20:06:09.787423213 +0000 UTC m=+0.148329887 container attach 97342d02235379be1a017ebfadfe865bec338d1596ee99fbe64dff9bee5a4d39 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:09 compute-0 sudo[77633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:09 compute-0 sudo[77633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:09 compute-0 sudo[77633]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:09 compute-0 sudo[77658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:09 compute-0 sudo[77658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:09 compute-0 sudo[77658]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:09 compute-0 sudo[77683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:09 compute-0 sudo[77683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:09 compute-0 sudo[77683]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:10 compute-0 sudo[77708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 20:06:10 compute-0 sudo[77708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:10 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:10 compute-0 ceph-mgr[75443]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 25 20:06:10 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 25 20:06:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 20:06:10 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:10 compute-0 quirky_thompson[77628]: Scheduled mgr update...
Nov 25 20:06:10 compute-0 sudo[77708]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:10 compute-0 systemd[1]: libpod-97342d02235379be1a017ebfadfe865bec338d1596ee99fbe64dff9bee5a4d39.scope: Deactivated successfully.
Nov 25 20:06:10 compute-0 podman[77598]: 2025-11-25 20:06:10.346621956 +0000 UTC m=+0.707528630 container died 97342d02235379be1a017ebfadfe865bec338d1596ee99fbe64dff9bee5a4d39 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:06:10 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-054212cd70326ef95d4991511c1d025b99d56e5c50ee3def9e6667c19153e7b9-merged.mount: Deactivated successfully.
Nov 25 20:06:10 compute-0 podman[77598]: 2025-11-25 20:06:10.393836191 +0000 UTC m=+0.754742855 container remove 97342d02235379be1a017ebfadfe865bec338d1596ee99fbe64dff9bee5a4d39 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:10 compute-0 systemd[1]: libpod-conmon-97342d02235379be1a017ebfadfe865bec338d1596ee99fbe64dff9bee5a4d39.scope: Deactivated successfully.
Nov 25 20:06:10 compute-0 sudo[77777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:10 compute-0 sudo[77777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:10 compute-0 sudo[77777]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:10 compute-0 podman[77812]: 2025-11-25 20:06:10.460880735 +0000 UTC m=+0.047115373 container create fcd3f7e1057ecafa6042cc186e1dfb5c5fbfb996001e6637afea8b8d61032eea (image=quay.io/ceph/ceph:v18, name=wonderful_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:10 compute-0 systemd[1]: Started libpod-conmon-fcd3f7e1057ecafa6042cc186e1dfb5c5fbfb996001e6637afea8b8d61032eea.scope.
Nov 25 20:06:10 compute-0 sudo[77820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:10 compute-0 sudo[77820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:10 compute-0 sudo[77820]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db875c53d24db8cccc16f641a993c7ae2fc74391f6b0b5ef02aec8f7d96fd800/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db875c53d24db8cccc16f641a993c7ae2fc74391f6b0b5ef02aec8f7d96fd800/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db875c53d24db8cccc16f641a993c7ae2fc74391f6b0b5ef02aec8f7d96fd800/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:10 compute-0 podman[77812]: 2025-11-25 20:06:10.522226404 +0000 UTC m=+0.108461072 container init fcd3f7e1057ecafa6042cc186e1dfb5c5fbfb996001e6637afea8b8d61032eea (image=quay.io/ceph/ceph:v18, name=wonderful_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:06:10 compute-0 podman[77812]: 2025-11-25 20:06:10.439157964 +0000 UTC m=+0.025392682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:10 compute-0 podman[77812]: 2025-11-25 20:06:10.532970757 +0000 UTC m=+0.119205395 container start fcd3f7e1057ecafa6042cc186e1dfb5c5fbfb996001e6637afea8b8d61032eea (image=quay.io/ceph/ceph:v18, name=wonderful_hermann, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 25 20:06:10 compute-0 podman[77812]: 2025-11-25 20:06:10.536196814 +0000 UTC m=+0.122431462 container attach fcd3f7e1057ecafa6042cc186e1dfb5c5fbfb996001e6637afea8b8d61032eea (image=quay.io/ceph/ceph:v18, name=wonderful_hermann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:10 compute-0 sudo[77857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:10 compute-0 sudo[77857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:10 compute-0 sudo[77857]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:10 compute-0 sudo[77884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:06:10 compute-0 sudo[77884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:10 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:06:10 compute-0 ceph-mon[75144]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:10 compute-0 ceph-mon[75144]: Saving service mon spec with placement count:5
Nov 25 20:06:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:11 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:11 compute-0 ceph-mgr[75443]: [cephadm INFO root] Saving service crash spec with placement *
Nov 25 20:06:11 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 25 20:06:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 20:06:11 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:11 compute-0 wonderful_hermann[77853]: Scheduled crash update...
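At this point the mgr has dispatched three "orch apply" commands: mon with placement count 5, mgr with placement count 2, and crash with placement "*" (all hosts). The log does not show the exact client invocations, but equivalent CLI calls, sketched here through subprocess with placement values taken from the audit lines, would look like this:

    import subprocess

    # Placement values come from the "Saving service ... spec" lines above;
    # the flag form is an assumption about how the bootstrap invoked them.
    for svc, placement in (("mon", "5"), ("mgr", "2"), ("crash", "*")):
        subprocess.run(["ceph", "orch", "apply", svc,
                        "--placement", placement], check=True)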
Nov 25 20:06:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:11 compute-0 systemd[1]: libpod-fcd3f7e1057ecafa6042cc186e1dfb5c5fbfb996001e6637afea8b8d61032eea.scope: Deactivated successfully.
Nov 25 20:06:11 compute-0 podman[77812]: 2025-11-25 20:06:11.119192036 +0000 UTC m=+0.705426714 container died fcd3f7e1057ecafa6042cc186e1dfb5c5fbfb996001e6637afea8b8d61032eea (image=quay.io/ceph/ceph:v18, name=wonderful_hermann, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-db875c53d24db8cccc16f641a993c7ae2fc74391f6b0b5ef02aec8f7d96fd800-merged.mount: Deactivated successfully.
Nov 25 20:06:11 compute-0 podman[77812]: 2025-11-25 20:06:11.176362801 +0000 UTC m=+0.762597449 container remove fcd3f7e1057ecafa6042cc186e1dfb5c5fbfb996001e6637afea8b8d61032eea (image=quay.io/ceph/ceph:v18, name=wonderful_hermann, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:06:11 compute-0 systemd[1]: libpod-conmon-fcd3f7e1057ecafa6042cc186e1dfb5c5fbfb996001e6637afea8b8d61032eea.scope: Deactivated successfully.
Nov 25 20:06:11 compute-0 podman[78001]: 2025-11-25 20:06:11.241640887 +0000 UTC m=+0.093237248 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:11 compute-0 podman[78021]: 2025-11-25 20:06:11.271742636 +0000 UTC m=+0.061705600 container create fc38c37da38c3ac35134d9974164dcdd6b00f7f959938ef2c4b7d15ac1e56857 (image=quay.io/ceph/ceph:v18, name=sweet_elion, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 20:06:11 compute-0 systemd[1]: Started libpod-conmon-fc38c37da38c3ac35134d9974164dcdd6b00f7f959938ef2c4b7d15ac1e56857.scope.
Nov 25 20:06:11 compute-0 podman[78021]: 2025-11-25 20:06:11.247225819 +0000 UTC m=+0.037188783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c02aaa1998347364e047aefc578a8cd4ae935a1c01db6e57b586ff3db38440/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c02aaa1998347364e047aefc578a8cd4ae935a1c01db6e57b586ff3db38440/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c02aaa1998347364e047aefc578a8cd4ae935a1c01db6e57b586ff3db38440/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:11 compute-0 podman[78021]: 2025-11-25 20:06:11.372485557 +0000 UTC m=+0.162448521 container init fc38c37da38c3ac35134d9974164dcdd6b00f7f959938ef2c4b7d15ac1e56857 (image=quay.io/ceph/ceph:v18, name=sweet_elion, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:06:11 compute-0 podman[78021]: 2025-11-25 20:06:11.382982042 +0000 UTC m=+0.172945006 container start fc38c37da38c3ac35134d9974164dcdd6b00f7f959938ef2c4b7d15ac1e56857 (image=quay.io/ceph/ceph:v18, name=sweet_elion, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:06:11 compute-0 podman[78021]: 2025-11-25 20:06:11.387111805 +0000 UTC m=+0.177074829 container attach fc38c37da38c3ac35134d9974164dcdd6b00f7f959938ef2c4b7d15ac1e56857 (image=quay.io/ceph/ceph:v18, name=sweet_elion, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:11 compute-0 podman[78001]: 2025-11-25 20:06:11.563673099 +0000 UTC m=+0.415269450 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 20:06:11 compute-0 sudo[77884]: pam_unix(sudo:session): session closed for user root
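The session just closed ran "cephadm ... ls", whose JSON daemon inventory the mgr then persists per host (the config-key set for mgr/cephadm/host.compute-0 follows). A small sketch of consuming that output is below; it assumes, as cephadm documents, that ls prints a JSON array of daemon records with keys such as "name", and the path placeholder is illustrative.

    import json
    import subprocess

    def list_daemons(cephadm_path: str) -> list[dict]:
        # Mirror the log's invocation: sudo python3 <cephadm> ls,
        # then decode the JSON inventory it prints on stdout.
        out = subprocess.run(
            ["sudo", "python3", cephadm_path, "ls"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # for d in list_daemons("/var/lib/ceph/<fsid>/cephadm.<digest>"):
    #     print(d.get("name"), d.get("state"))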
Nov 25 20:06:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:11 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:11 compute-0 ceph-mon[75144]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:11 compute-0 ceph-mon[75144]: Saving service mgr spec with placement count:2
Nov 25 20:06:11 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:11 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:11 compute-0 sudo[78102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:11 compute-0 sudo[78102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:11 compute-0 sudo[78102]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:11 compute-0 sudo[78127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:11 compute-0 sudo[78127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:11 compute-0 sudo[78127]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 25 20:06:11 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2601804189' entity='client.admin' 
Nov 25 20:06:11 compute-0 sudo[78152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:11 compute-0 sudo[78152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:11 compute-0 systemd[1]: libpod-fc38c37da38c3ac35134d9974164dcdd6b00f7f959938ef2c4b7d15ac1e56857.scope: Deactivated successfully.
Nov 25 20:06:11 compute-0 podman[78021]: 2025-11-25 20:06:11.943494162 +0000 UTC m=+0.733457106 container died fc38c37da38c3ac35134d9974164dcdd6b00f7f959938ef2c4b7d15ac1e56857 (image=quay.io/ceph/ceph:v18, name=sweet_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:11 compute-0 sudo[78152]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7c02aaa1998347364e047aefc578a8cd4ae935a1c01db6e57b586ff3db38440-merged.mount: Deactivated successfully.
Nov 25 20:06:11 compute-0 podman[78021]: 2025-11-25 20:06:11.984472317 +0000 UTC m=+0.774435251 container remove fc38c37da38c3ac35134d9974164dcdd6b00f7f959938ef2c4b7d15ac1e56857 (image=quay.io/ceph/ceph:v18, name=sweet_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:11 compute-0 systemd[1]: libpod-conmon-fc38c37da38c3ac35134d9974164dcdd6b00f7f959938ef2c4b7d15ac1e56857.scope: Deactivated successfully.
Nov 25 20:06:12 compute-0 sudo[78185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:06:12 compute-0 sudo[78185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:12 compute-0 podman[78212]: 2025-11-25 20:06:12.060455294 +0000 UTC m=+0.053442765 container create 0eaa5e028dab646bed5bc77db5ef43c92571f2e7f4cfad9584d09ad28301300b (image=quay.io/ceph/ceph:v18, name=hopeful_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 20:06:12 compute-0 systemd[1]: Started libpod-conmon-0eaa5e028dab646bed5bc77db5ef43c92571f2e7f4cfad9584d09ad28301300b.scope.
Nov 25 20:06:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:12 compute-0 podman[78212]: 2025-11-25 20:06:12.033833681 +0000 UTC m=+0.026821212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a0e84f37eb5fa996fa3ca72cf9b2574628fe39e30b15268451df5a84d96070/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a0e84f37eb5fa996fa3ca72cf9b2574628fe39e30b15268451df5a84d96070/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a0e84f37eb5fa996fa3ca72cf9b2574628fe39e30b15268451df5a84d96070/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:12 compute-0 podman[78212]: 2025-11-25 20:06:12.151727428 +0000 UTC m=+0.144714909 container init 0eaa5e028dab646bed5bc77db5ef43c92571f2e7f4cfad9584d09ad28301300b (image=quay.io/ceph/ceph:v18, name=hopeful_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:12 compute-0 podman[78212]: 2025-11-25 20:06:12.160903267 +0000 UTC m=+0.153890708 container start 0eaa5e028dab646bed5bc77db5ef43c92571f2e7f4cfad9584d09ad28301300b (image=quay.io/ceph/ceph:v18, name=hopeful_albattani, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:12 compute-0 podman[78212]: 2025-11-25 20:06:12.16469147 +0000 UTC m=+0.157678911 container attach 0eaa5e028dab646bed5bc77db5ef43c92571f2e7f4cfad9584d09ad28301300b (image=quay.io/ceph/ceph:v18, name=hopeful_albattani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:06:12 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78251 (sysctl)
Nov 25 20:06:12 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 25 20:06:12 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 25 20:06:12 compute-0 sudo[78185]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:12 compute-0 sudo[78292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:12 compute-0 sudo[78292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:12 compute-0 sudo[78292]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:12 compute-0 sudo[78317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:12 compute-0 sudo[78317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:12 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 25 20:06:12 compute-0 sudo[78317]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:12 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:06:12 compute-0 systemd[1]: libpod-0eaa5e028dab646bed5bc77db5ef43c92571f2e7f4cfad9584d09ad28301300b.scope: Deactivated successfully.
Nov 25 20:06:12 compute-0 podman[78212]: 2025-11-25 20:06:12.710672545 +0000 UTC m=+0.703659986 container died 0eaa5e028dab646bed5bc77db5ef43c92571f2e7f4cfad9584d09ad28301300b (image=quay.io/ceph/ceph:v18, name=hopeful_albattani, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-95a0e84f37eb5fa996fa3ca72cf9b2574628fe39e30b15268451df5a84d96070-merged.mount: Deactivated successfully.
Nov 25 20:06:12 compute-0 sudo[78344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:12 compute-0 sudo[78344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:12 compute-0 sudo[78344]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:12 compute-0 podman[78212]: 2025-11-25 20:06:12.759551125 +0000 UTC m=+0.752538556 container remove 0eaa5e028dab646bed5bc77db5ef43c92571f2e7f4cfad9584d09ad28301300b (image=quay.io/ceph/ceph:v18, name=hopeful_albattani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:12 compute-0 systemd[1]: libpod-conmon-0eaa5e028dab646bed5bc77db5ef43c92571f2e7f4cfad9584d09ad28301300b.scope: Deactivated successfully.
Nov 25 20:06:12 compute-0 ceph-mon[75144]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:12 compute-0 ceph-mon[75144]: Saving service crash spec with placement *
Nov 25 20:06:12 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2601804189' entity='client.admin' 
Nov 25 20:06:12 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:12 compute-0 podman[78382]: 2025-11-25 20:06:12.820879164 +0000 UTC m=+0.038115329 container create 419a9f1eba41f99b091cc92b340d0107ca45f00126b57d872c2a41f5ea5b6dcc (image=quay.io/ceph/ceph:v18, name=elastic_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:12 compute-0 sudo[78380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 25 20:06:12 compute-0 sudo[78380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:12 compute-0 systemd[1]: Started libpod-conmon-419a9f1eba41f99b091cc92b340d0107ca45f00126b57d872c2a41f5ea5b6dcc.scope.
Nov 25 20:06:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f0011a089742291d08f09e3cce5b2aa3393807d650fee2e7fef00b5a4ab457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f0011a089742291d08f09e3cce5b2aa3393807d650fee2e7fef00b5a4ab457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f0011a089742291d08f09e3cce5b2aa3393807d650fee2e7fef00b5a4ab457/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:12 compute-0 podman[78382]: 2025-11-25 20:06:12.808247729 +0000 UTC m=+0.025483914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:12 compute-0 podman[78382]: 2025-11-25 20:06:12.919049034 +0000 UTC m=+0.136285229 container init 419a9f1eba41f99b091cc92b340d0107ca45f00126b57d872c2a41f5ea5b6dcc (image=quay.io/ceph/ceph:v18, name=elastic_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 20:06:12 compute-0 podman[78382]: 2025-11-25 20:06:12.926391514 +0000 UTC m=+0.143627679 container start 419a9f1eba41f99b091cc92b340d0107ca45f00126b57d872c2a41f5ea5b6dcc (image=quay.io/ceph/ceph:v18, name=elastic_kalam, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:12 compute-0 podman[78382]: 2025-11-25 20:06:12.929625741 +0000 UTC m=+0.146861906 container attach 419a9f1eba41f99b091cc92b340d0107ca45f00126b57d872c2a41f5ea5b6dcc (image=quay.io/ceph/ceph:v18, name=elastic_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 20:06:13 compute-0 sudo[78380]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:13 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:13 compute-0 sudo[78445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:13 compute-0 sudo[78445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:13 compute-0 sudo[78445]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:13 compute-0 sudo[78470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:13 compute-0 sudo[78470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:13 compute-0 sudo[78470]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:13 compute-0 sudo[78513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:13 compute-0 sudo[78513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:13 compute-0 sudo[78513]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:13 compute-0 sudo[78539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- inventory --format=json-pretty --filter-for-batch
Nov 25 20:06:13 compute-0 sudo[78539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:13 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 20:06:13 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:13 compute-0 ceph-mgr[75443]: [cephadm INFO root] Added label _admin to host compute-0
Nov 25 20:06:13 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 25 20:06:13 compute-0 elastic_kalam[78421]: Added label _admin to host compute-0
Nov 25 20:06:13 compute-0 systemd[1]: libpod-419a9f1eba41f99b091cc92b340d0107ca45f00126b57d872c2a41f5ea5b6dcc.scope: Deactivated successfully.
Nov 25 20:06:13 compute-0 podman[78382]: 2025-11-25 20:06:13.470167068 +0000 UTC m=+0.687403263 container died 419a9f1eba41f99b091cc92b340d0107ca45f00126b57d872c2a41f5ea5b6dcc (image=quay.io/ceph/ceph:v18, name=elastic_kalam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:06:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-18f0011a089742291d08f09e3cce5b2aa3393807d650fee2e7fef00b5a4ab457-merged.mount: Deactivated successfully.
Nov 25 20:06:13 compute-0 podman[78382]: 2025-11-25 20:06:13.528342231 +0000 UTC m=+0.745578436 container remove 419a9f1eba41f99b091cc92b340d0107ca45f00126b57d872c2a41f5ea5b6dcc (image=quay.io/ceph/ceph:v18, name=elastic_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:13 compute-0 systemd[1]: libpod-conmon-419a9f1eba41f99b091cc92b340d0107ca45f00126b57d872c2a41f5ea5b6dcc.scope: Deactivated successfully.
Nov 25 20:06:13 compute-0 podman[78581]: 2025-11-25 20:06:13.599988511 +0000 UTC m=+0.043117995 container create 0972a6b2ab665174e91283797da98ca9eac6f4381757f04a38314f2ea4859266 (image=quay.io/ceph/ceph:v18, name=fervent_lumiere, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:06:13 compute-0 systemd[1]: Started libpod-conmon-0972a6b2ab665174e91283797da98ca9eac6f4381757f04a38314f2ea4859266.scope.
Nov 25 20:06:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05e39e98a689cc30efee9f513a07c53b4d071f721fad4947a799dbc9806a5fb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05e39e98a689cc30efee9f513a07c53b4d071f721fad4947a799dbc9806a5fb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05e39e98a689cc30efee9f513a07c53b4d071f721fad4947a799dbc9806a5fb5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:13 compute-0 podman[78581]: 2025-11-25 20:06:13.672101402 +0000 UTC m=+0.115230956 container init 0972a6b2ab665174e91283797da98ca9eac6f4381757f04a38314f2ea4859266 (image=quay.io/ceph/ceph:v18, name=fervent_lumiere, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:13 compute-0 podman[78581]: 2025-11-25 20:06:13.581428255 +0000 UTC m=+0.024557739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:13 compute-0 podman[78581]: 2025-11-25 20:06:13.679078402 +0000 UTC m=+0.122207916 container start 0972a6b2ab665174e91283797da98ca9eac6f4381757f04a38314f2ea4859266 (image=quay.io/ceph/ceph:v18, name=fervent_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:13 compute-0 podman[78581]: 2025-11-25 20:06:13.682995698 +0000 UTC m=+0.126125262 container attach 0972a6b2ab665174e91283797da98ca9eac6f4381757f04a38314f2ea4859266 (image=quay.io/ceph/ceph:v18, name=fervent_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 20:06:13 compute-0 ceph-mon[75144]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:13 compute-0 podman[78641]: 2025-11-25 20:06:13.863348696 +0000 UTC m=+0.065711090 container create e205b8889356a1decbcb68f9cc5fcb231a580ce85a809b6504c2e42d99084e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:06:13 compute-0 systemd[1]: Started libpod-conmon-e205b8889356a1decbcb68f9cc5fcb231a580ce85a809b6504c2e42d99084e56.scope.
Nov 25 20:06:13 compute-0 podman[78641]: 2025-11-25 20:06:13.835725674 +0000 UTC m=+0.038088128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:13 compute-0 podman[78641]: 2025-11-25 20:06:13.956572952 +0000 UTC m=+0.158935346 container init e205b8889356a1decbcb68f9cc5fcb231a580ce85a809b6504c2e42d99084e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:13 compute-0 podman[78641]: 2025-11-25 20:06:13.966712308 +0000 UTC m=+0.169074702 container start e205b8889356a1decbcb68f9cc5fcb231a580ce85a809b6504c2e42d99084e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:06:13 compute-0 podman[78641]: 2025-11-25 20:06:13.9704703 +0000 UTC m=+0.172832734 container attach e205b8889356a1decbcb68f9cc5fcb231a580ce85a809b6504c2e42d99084e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:06:13 compute-0 fervent_keldysh[78657]: 167 167
Nov 25 20:06:13 compute-0 systemd[1]: libpod-e205b8889356a1decbcb68f9cc5fcb231a580ce85a809b6504c2e42d99084e56.scope: Deactivated successfully.
Nov 25 20:06:13 compute-0 podman[78641]: 2025-11-25 20:06:13.974389096 +0000 UTC m=+0.176751480 container died e205b8889356a1decbcb68f9cc5fcb231a580ce85a809b6504c2e42d99084e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0458d001fe82515d0b16206a7e944a3d1ef1660c2029b688bdc4577eb47e650-merged.mount: Deactivated successfully.
Nov 25 20:06:14 compute-0 podman[78641]: 2025-11-25 20:06:14.018184278 +0000 UTC m=+0.220546632 container remove e205b8889356a1decbcb68f9cc5fcb231a580ce85a809b6504c2e42d99084e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:14 compute-0 systemd[1]: libpod-conmon-e205b8889356a1decbcb68f9cc5fcb231a580ce85a809b6504c2e42d99084e56.scope: Deactivated successfully.
Nov 25 20:06:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 25 20:06:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1207334056' entity='client.admin' 
Nov 25 20:06:14 compute-0 systemd[1]: libpod-0972a6b2ab665174e91283797da98ca9eac6f4381757f04a38314f2ea4859266.scope: Deactivated successfully.
Nov 25 20:06:14 compute-0 podman[78581]: 2025-11-25 20:06:14.235726937 +0000 UTC m=+0.678856411 container died 0972a6b2ab665174e91283797da98ca9eac6f4381757f04a38314f2ea4859266 (image=quay.io/ceph/ceph:v18, name=fervent_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:06:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-05e39e98a689cc30efee9f513a07c53b4d071f721fad4947a799dbc9806a5fb5-merged.mount: Deactivated successfully.
Nov 25 20:06:14 compute-0 podman[78581]: 2025-11-25 20:06:14.283120406 +0000 UTC m=+0.726249880 container remove 0972a6b2ab665174e91283797da98ca9eac6f4381757f04a38314f2ea4859266 (image=quay.io/ceph/ceph:v18, name=fervent_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:14 compute-0 systemd[1]: libpod-conmon-0972a6b2ab665174e91283797da98ca9eac6f4381757f04a38314f2ea4859266.scope: Deactivated successfully.
Nov 25 20:06:14 compute-0 podman[78704]: 2025-11-25 20:06:14.357454498 +0000 UTC m=+0.050689150 container create 7e8062a4f8fbd9da3a4109ff5a03726bfdedf3242764a56b6a82f850b117b36b (image=quay.io/ceph/ceph:v18, name=mystifying_antonelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:06:14 compute-0 systemd[1]: Started libpod-conmon-7e8062a4f8fbd9da3a4109ff5a03726bfdedf3242764a56b6a82f850b117b36b.scope.
Nov 25 20:06:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a82fb7bf3dc0e0f47b6050ee25b4178bf606b5fe72d103fce9064f141b0b17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a82fb7bf3dc0e0f47b6050ee25b4178bf606b5fe72d103fce9064f141b0b17/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a82fb7bf3dc0e0f47b6050ee25b4178bf606b5fe72d103fce9064f141b0b17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:14 compute-0 podman[78704]: 2025-11-25 20:06:14.333427464 +0000 UTC m=+0.026662166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:14 compute-0 podman[78704]: 2025-11-25 20:06:14.442086741 +0000 UTC m=+0.135321443 container init 7e8062a4f8fbd9da3a4109ff5a03726bfdedf3242764a56b6a82f850b117b36b (image=quay.io/ceph/ceph:v18, name=mystifying_antonelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:14 compute-0 podman[78704]: 2025-11-25 20:06:14.448023142 +0000 UTC m=+0.141257824 container start 7e8062a4f8fbd9da3a4109ff5a03726bfdedf3242764a56b6a82f850b117b36b (image=quay.io/ceph/ceph:v18, name=mystifying_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:06:14 compute-0 podman[78704]: 2025-11-25 20:06:14.451475097 +0000 UTC m=+0.144709759 container attach 7e8062a4f8fbd9da3a4109ff5a03726bfdedf3242764a56b6a82f850b117b36b (image=quay.io/ceph/ceph:v18, name=mystifying_antonelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:14 compute-0 ceph-mgr[75443]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 20:06:14 compute-0 ceph-mon[75144]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:14 compute-0 ceph-mon[75144]: Added label _admin to host compute-0
Nov 25 20:06:14 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1207334056' entity='client.admin' 
Nov 25 20:06:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 25 20:06:15 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4046453591' entity='client.admin' 
Nov 25 20:06:15 compute-0 mystifying_antonelli[78720]: set mgr/dashboard/cluster/status
Nov 25 20:06:15 compute-0 systemd[1]: libpod-7e8062a4f8fbd9da3a4109ff5a03726bfdedf3242764a56b6a82f850b117b36b.scope: Deactivated successfully.
Nov 25 20:06:15 compute-0 podman[78704]: 2025-11-25 20:06:15.122077681 +0000 UTC m=+0.815312323 container died 7e8062a4f8fbd9da3a4109ff5a03726bfdedf3242764a56b6a82f850b117b36b (image=quay.io/ceph/ceph:v18, name=mystifying_antonelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-38a82fb7bf3dc0e0f47b6050ee25b4178bf606b5fe72d103fce9064f141b0b17-merged.mount: Deactivated successfully.
Nov 25 20:06:15 compute-0 podman[78704]: 2025-11-25 20:06:15.16060228 +0000 UTC m=+0.853836932 container remove 7e8062a4f8fbd9da3a4109ff5a03726bfdedf3242764a56b6a82f850b117b36b (image=quay.io/ceph/ceph:v18, name=mystifying_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:15 compute-0 systemd[1]: libpod-conmon-7e8062a4f8fbd9da3a4109ff5a03726bfdedf3242764a56b6a82f850b117b36b.scope: Deactivated successfully.
Nov 25 20:06:15 compute-0 sudo[74139]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:15 compute-0 podman[78764]: 2025-11-25 20:06:15.330954635 +0000 UTC m=+0.050502626 container create 20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:15 compute-0 systemd[1]: Started libpod-conmon-20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4.scope.
Nov 25 20:06:15 compute-0 podman[78764]: 2025-11-25 20:06:15.307859116 +0000 UTC m=+0.027407177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafc6eae1bebedba9a685363a4a1b039f17ed0584bd2d6017b304744735499ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafc6eae1bebedba9a685363a4a1b039f17ed0584bd2d6017b304744735499ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafc6eae1bebedba9a685363a4a1b039f17ed0584bd2d6017b304744735499ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafc6eae1bebedba9a685363a4a1b039f17ed0584bd2d6017b304744735499ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:15 compute-0 podman[78764]: 2025-11-25 20:06:15.438783978 +0000 UTC m=+0.158332049 container init 20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:06:15 compute-0 podman[78764]: 2025-11-25 20:06:15.45393971 +0000 UTC m=+0.173487731 container start 20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:15 compute-0 podman[78764]: 2025-11-25 20:06:15.458271839 +0000 UTC m=+0.177819910 container attach 20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:15 compute-0 sudo[78809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuknfgspduzhzflyrrhtgrufoxyrvxyr ; /usr/bin/python3'
Nov 25 20:06:15 compute-0 sudo[78809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:15 compute-0 python3[78811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:15 compute-0 podman[78812]: 2025-11-25 20:06:15.862437834 +0000 UTC m=+0.069072870 container create aa98e6bf55b3f0d690423a17b5935d28d6c5ffa491ccbff90bf80aee86efa605 (image=quay.io/ceph/ceph:v18, name=priceless_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 20:06:15 compute-0 systemd[1]: Started libpod-conmon-aa98e6bf55b3f0d690423a17b5935d28d6c5ffa491ccbff90bf80aee86efa605.scope.
Nov 25 20:06:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0999deb74846cf4dc82c8a42e1f651953afcb35563238092a208ee2ce33950e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0999deb74846cf4dc82c8a42e1f651953afcb35563238092a208ee2ce33950e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:15 compute-0 podman[78812]: 2025-11-25 20:06:15.836361345 +0000 UTC m=+0.042996411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:15 compute-0 podman[78812]: 2025-11-25 20:06:15.942657317 +0000 UTC m=+0.149292353 container init aa98e6bf55b3f0d690423a17b5935d28d6c5ffa491ccbff90bf80aee86efa605 (image=quay.io/ceph/ceph:v18, name=priceless_jepsen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:15 compute-0 podman[78812]: 2025-11-25 20:06:15.953386419 +0000 UTC m=+0.160021485 container start aa98e6bf55b3f0d690423a17b5935d28d6c5ffa491ccbff90bf80aee86efa605 (image=quay.io/ceph/ceph:v18, name=priceless_jepsen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:15 compute-0 podman[78812]: 2025-11-25 20:06:15.957278445 +0000 UTC m=+0.163913491 container attach aa98e6bf55b3f0d690423a17b5935d28d6c5ffa491ccbff90bf80aee86efa605 (image=quay.io/ceph/ceph:v18, name=priceless_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:06:16 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4046453591' entity='client.admin' 
Nov 25 20:06:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 25 20:06:16 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2781791921' entity='client.admin' 
Nov 25 20:06:16 compute-0 systemd[1]: libpod-aa98e6bf55b3f0d690423a17b5935d28d6c5ffa491ccbff90bf80aee86efa605.scope: Deactivated successfully.
Nov 25 20:06:16 compute-0 podman[78812]: 2025-11-25 20:06:16.547131653 +0000 UTC m=+0.753766679 container died aa98e6bf55b3f0d690423a17b5935d28d6c5ffa491ccbff90bf80aee86efa605 (image=quay.io/ceph/ceph:v18, name=priceless_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 20:06:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0999deb74846cf4dc82c8a42e1f651953afcb35563238092a208ee2ce33950e-merged.mount: Deactivated successfully.
Nov 25 20:06:16 compute-0 podman[78812]: 2025-11-25 20:06:16.591276963 +0000 UTC m=+0.797911999 container remove aa98e6bf55b3f0d690423a17b5935d28d6c5ffa491ccbff90bf80aee86efa605 (image=quay.io/ceph/ceph:v18, name=priceless_jepsen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:16 compute-0 systemd[1]: libpod-conmon-aa98e6bf55b3f0d690423a17b5935d28d6c5ffa491ccbff90bf80aee86efa605.scope: Deactivated successfully.
Nov 25 20:06:16 compute-0 sudo[78809]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:16 compute-0 ceph-mgr[75443]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 25 20:06:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:16 compute-0 ceph-mon[75144]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 20:06:17 compute-0 sweet_hopper[78781]: [
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:     {
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         "available": false,
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         "ceph_device": false,
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         "lsm_data": {},
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         "lvs": [],
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         "path": "/dev/sr0",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         "rejected_reasons": [
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "Has a FileSystem",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "Insufficient space (<5GB)"
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         ],
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         "sys_api": {
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "actuators": null,
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "device_nodes": "sr0",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "devname": "sr0",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "human_readable_size": "482.00 KB",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "id_bus": "ata",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "model": "QEMU DVD-ROM",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "nr_requests": "2",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "parent": "/dev/sr0",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "partitions": {},
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "path": "/dev/sr0",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "removable": "1",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "rev": "2.5+",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "ro": "0",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "rotational": "1",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "sas_address": "",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "sas_device_handle": "",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "scheduler_mode": "mq-deadline",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "sectors": 0,
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "sectorsize": "2048",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "size": 493568.0,
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "support_discard": "2048",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "type": "disk",
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:             "vendor": "QEMU"
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:         }
Nov 25 20:06:17 compute-0 sweet_hopper[78781]:     }
Nov 25 20:06:17 compute-0 sweet_hopper[78781]: ]
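The JSON array above is the per-host device inventory that cephadm gathers (the same data `ceph orch device ls` reports); here /dev/sr0 is rejected because it carries a filesystem and is smaller than 5 GB, so no OSD can be placed from this scan. A minimal sketch of filtering such an inventory for usable devices, assuming the report has been saved to a file (the filename is hypothetical):

    import json

    # Inventory JSON as emitted above by the cephadm device scan.
    with open("inventory.json") as f:
        devices = json.load(f)

    for dev in devices:
        if dev["available"]:
            print(f"usable: {dev['path']}")
        else:
            reasons = ", ".join(dev["rejected_reasons"])
            print(f"rejected: {dev['path']} ({reasons})")
    # For the inventory above this prints:
    # rejected: /dev/sr0 (Has a FileSystem, Insufficient space (<5GB))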
Nov 25 20:06:17 compute-0 systemd[1]: libpod-20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4.scope: Deactivated successfully.
Nov 25 20:06:17 compute-0 systemd[1]: libpod-20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4.scope: Consumed 1.670s CPU time.
Nov 25 20:06:17 compute-0 podman[78764]: 2025-11-25 20:06:17.074063889 +0000 UTC m=+1.793611910 container died 20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bafc6eae1bebedba9a685363a4a1b039f17ed0584bd2d6017b304744735499ff-merged.mount: Deactivated successfully.
Nov 25 20:06:17 compute-0 podman[78764]: 2025-11-25 20:06:17.130250118 +0000 UTC m=+1.849798109 container remove 20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:06:17 compute-0 systemd[1]: libpod-conmon-20bc5219d238afdf7c7ad366767b5b686feb61bda76af0e4ed21f0e48f73edf4.scope: Deactivated successfully.
Nov 25 20:06:17 compute-0 sudo[78539]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 20:06:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:06:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:06:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:06:17 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 20:06:17 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 20:06:17 compute-0 sudo[80833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:17 compute-0 sudo[80833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 sudo[80833]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 sudo[80858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 20:06:17 compute-0 sudo[80858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 sudo[80858]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 sudo[80883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:17 compute-0 sudo[80883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 sudo[80883]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 sudo[80908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph
Nov 25 20:06:17 compute-0 sudo[80908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 sudo[80908]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2781791921' entity='client.admin' 
Nov 25 20:06:17 compute-0 ceph-mon[75144]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:17 compute-0 ceph-mon[75144]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 20:06:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:06:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:06:17 compute-0 sudo[80956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:17 compute-0 sudo[80956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 sudo[80956]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 sudo[81005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.conf.new
Nov 25 20:06:17 compute-0 sudo[81005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 sudo[81005]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 sudo[81055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjujlradwcldaxvpwrerkgzwzidudvpd ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764101177.0115907-36538-207049793401210/async_wrapper.py j803191766078 30 /home/zuul/.ansible/tmp/ansible-tmp-1764101177.0115907-36538-207049793401210/AnsiballZ_command.py _'
Nov 25 20:06:17 compute-0 sudo[81055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:17 compute-0 sudo[81056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:17 compute-0 sudo[81056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 sudo[81056]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 sudo[81083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:17 compute-0 sudo[81083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 ansible-async_wrapper.py[81063]: Invoked with j803191766078 30 /home/zuul/.ansible/tmp/ansible-tmp-1764101177.0115907-36538-207049793401210/AnsiballZ_command.py _
Nov 25 20:06:17 compute-0 sudo[81083]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 ansible-async_wrapper.py[81110]: Starting module and watcher
Nov 25 20:06:17 compute-0 ansible-async_wrapper.py[81110]: Start watching 81111 (30)
Nov 25 20:06:17 compute-0 ansible-async_wrapper.py[81111]: Start module (81111)
Nov 25 20:06:17 compute-0 ansible-async_wrapper.py[81063]: Return async_wrapper task started.
Nov 25 20:06:17 compute-0 sudo[81055]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 sudo[81112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:17 compute-0 sudo[81112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 sudo[81112]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 sudo[81138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.conf.new
Nov 25 20:06:17 compute-0 sudo[81138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:17 compute-0 sudo[81138]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:17 compute-0 python3[81113]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:18 compute-0 podman[81183]: 2025-11-25 20:06:18.014775103 +0000 UTC m=+0.047705359 container create bf0893703e63b187bdfb655c7ecbe13326c27e8c617a2056dffddc1721c96b55 (image=quay.io/ceph/ceph:v18, name=competent_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:18 compute-0 systemd[1]: Started libpod-conmon-bf0893703e63b187bdfb655c7ecbe13326c27e8c617a2056dffddc1721c96b55.scope.
Nov 25 20:06:18 compute-0 sudo[81192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:18 compute-0 sudo[81192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81192]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8259281ddc5f658ccad0d929db011b3e1dc99f640a27281ebd09e6749ccf49b2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8259281ddc5f658ccad0d929db011b3e1dc99f640a27281ebd09e6749ccf49b2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:18 compute-0 podman[81183]: 2025-11-25 20:06:17.993250397 +0000 UTC m=+0.026180693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:18 compute-0 podman[81183]: 2025-11-25 20:06:18.108890323 +0000 UTC m=+0.141820639 container init bf0893703e63b187bdfb655c7ecbe13326c27e8c617a2056dffddc1721c96b55 (image=quay.io/ceph/ceph:v18, name=competent_khayyam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:18 compute-0 podman[81183]: 2025-11-25 20:06:18.121455015 +0000 UTC m=+0.154385301 container start bf0893703e63b187bdfb655c7ecbe13326c27e8c617a2056dffddc1721c96b55 (image=quay.io/ceph/ceph:v18, name=competent_khayyam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:18 compute-0 podman[81183]: 2025-11-25 20:06:18.125017102 +0000 UTC m=+0.157947388 container attach bf0893703e63b187bdfb655c7ecbe13326c27e8c617a2056dffddc1721c96b55 (image=quay.io/ceph/ceph:v18, name=competent_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:06:18 compute-0 sudo[81230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.conf.new
Nov 25 20:06:18 compute-0 sudo[81230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81230]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 sudo[81256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:18 compute-0 sudo[81256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81256]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 sudo[81281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.conf.new
Nov 25 20:06:18 compute-0 sudo[81281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81281]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 sudo[81306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:18 compute-0 sudo[81306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81306]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 sudo[81341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 20:06:18 compute-0 sudo[81341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81341]: pam_unix(sudo:session): session closed for user root
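The sudo sequence above (touch ceph.conf.new in a staging directory, chown/chmod, then mv into /etc/ceph/ceph.conf, with the interleaved `sudo /bin/true` calls apparently serving as passwordless-sudo checks before each step) is a write-to-temp-then-rename pattern: when the staging file and the destination end up on the same filesystem, the final rename is atomic, so readers never observe a half-written config. A minimal sketch of the same pattern in Python (paths and file content are illustrative, not taken from the playbook):

    import os

    def atomic_write(path, data, mode=0o644):
        # Stage the new content next to the destination so os.replace()
        # stays on one filesystem and is therefore atomic.
        tmp = path + ".new"
        with open(tmp, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.chmod(tmp, mode)
        os.replace(tmp, path)  # readers see either the old or the new file

    atomic_write("/etc/ceph/ceph.conf",
                 "[global]\nfsid = 712dd110-763a-5547-8ef7-acda1414fdce\n")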
Nov 25 20:06:18 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.conf
Nov 25 20:06:18 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.conf
Nov 25 20:06:18 compute-0 ceph-mon[75144]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 20:06:18 compute-0 sudo[81375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:18 compute-0 sudo[81375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81375]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 sudo[81400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config
Nov 25 20:06:18 compute-0 sudo[81400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81400]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:06:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:18 compute-0 competent_khayyam[81226]: 
Nov 25 20:06:18 compute-0 competent_khayyam[81226]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
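The `{"available": true, "backend": "cephadm", ...}` line is the reply to the `orch status --format json` command launched at 20:06:17; the calling task can treat the cluster as ready only when the orchestrator reports available and not paused. A minimal sketch of that readiness check, reusing the podman invocation recorded in this log (the retry policy and function name are assumptions, not taken from the playbook):

    import json
    import subprocess
    import time

    ORCH_CMD = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "712dd110-763a-5547-8ef7-acda1414fdce",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "status", "--format", "json",
    ]

    def wait_for_orchestrator(retries=5, delay=2):
        # Poll `ceph orch status` until the cephadm backend is available.
        for _ in range(retries):
            out = subprocess.run(ORCH_CMD, capture_output=True, text=True)
            if out.returncode == 0:
                status = json.loads(out.stdout)  # tolerates the leading blank line
                if status.get("available") and not status.get("paused"):
                    return status
            time.sleep(delay)
        raise RuntimeError("cephadm orchestrator not available")

    print(wait_for_orchestrator())
    # e.g. {'available': True, 'backend': 'cephadm', 'paused': False, 'workers': 10}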
Nov 25 20:06:18 compute-0 systemd[1]: libpod-bf0893703e63b187bdfb655c7ecbe13326c27e8c617a2056dffddc1721c96b55.scope: Deactivated successfully.
Nov 25 20:06:18 compute-0 podman[81183]: 2025-11-25 20:06:18.72592429 +0000 UTC m=+0.758854636 container died bf0893703e63b187bdfb655c7ecbe13326c27e8c617a2056dffddc1721c96b55 (image=quay.io/ceph/ceph:v18, name=competent_khayyam, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:18 compute-0 sudo[81425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:18 compute-0 sudo[81425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81425]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8259281ddc5f658ccad0d929db011b3e1dc99f640a27281ebd09e6749ccf49b2-merged.mount: Deactivated successfully.
Nov 25 20:06:18 compute-0 podman[81183]: 2025-11-25 20:06:18.774880543 +0000 UTC m=+0.807810799 container remove bf0893703e63b187bdfb655c7ecbe13326c27e8c617a2056dffddc1721c96b55 (image=quay.io/ceph/ceph:v18, name=competent_khayyam, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:06:18 compute-0 systemd[1]: libpod-conmon-bf0893703e63b187bdfb655c7ecbe13326c27e8c617a2056dffddc1721c96b55.scope: Deactivated successfully.
Nov 25 20:06:18 compute-0 ansible-async_wrapper.py[81111]: Module complete (81111)
Nov 25 20:06:18 compute-0 sudo[81459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config
Nov 25 20:06:18 compute-0 sudo[81459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81459]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 sudo[81488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:18 compute-0 sudo[81488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81488]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:18 compute-0 sudo[81536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.conf.new
Nov 25 20:06:18 compute-0 sudo[81536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:18 compute-0 sudo[81536]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:19 compute-0 sudo[81561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81561]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxasnrchcmqtlxywvgmdcrghboljltxt ; /usr/bin/python3'
Nov 25 20:06:19 compute-0 sudo[81586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:19 compute-0 sudo[81632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:19 compute-0 sudo[81586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81586]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:19 compute-0 sudo[81637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81637]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 python3[81635]: ansible-ansible.legacy.async_status Invoked with jid=j803191766078.81063 mode=status _async_dir=/root/.ansible_async
Nov 25 20:06:19 compute-0 sudo[81632]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.conf.new
Nov 25 20:06:19 compute-0 sudo[81662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81662]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:19 compute-0 sudo[81779]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fquuxeuvgqubkwiosucsbzpvptmjkjsr ; /usr/bin/python3'
Nov 25 20:06:19 compute-0 sudo[81733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:19 compute-0 sudo[81733]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.conf.new
Nov 25 20:06:19 compute-0 sudo[81784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81784]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 ceph-mon[75144]: Updating compute-0:/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.conf
Nov 25 20:06:19 compute-0 ceph-mon[75144]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:06:19 compute-0 ceph-mon[75144]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:19 compute-0 sudo[81809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:19 compute-0 sudo[81809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81809]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 python3[81783]: ansible-ansible.legacy.async_status Invoked with jid=j803191766078.81063 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 20:06:19 compute-0 sudo[81779]: pam_unix(sudo:session): session closed for user root
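The async_wrapper/async_status pairing above is Ansible's fire-and-poll pattern: the command task at 20:06:17 was started with an async timeout of 30 seconds (the `30` argument to async_wrapper.py), given job id j803191766078.81063, and its result record under the async dir is then polled with mode=status and finally removed with mode=cleanup. A minimal sketch of inspecting such a job record directly; the result-file location and its keys (e.g. `finished`) are assumptions based on what async_status returns, not confirmed by this log:

    import json
    import os

    def async_job_finished(jid, async_dir="~/.ansible_async"):
        # Assumption: the async wrapper writes the module result as JSON
        # into a file named after the job id; async_status reads it back.
        path = os.path.join(os.path.expanduser(async_dir), jid)
        with open(path) as f:
            result = json.load(f)
        return bool(result.get("finished"))

    print(async_job_finished("j803191766078.81063"))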
Nov 25 20:06:19 compute-0 sudo[81834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.conf.new
Nov 25 20:06:19 compute-0 sudo[81834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81834]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:19 compute-0 sudo[81859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81859]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.conf.new /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.conf
Nov 25 20:06:19 compute-0 sudo[81884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81884]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 20:06:19 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 20:06:19 compute-0 sudo[81909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:19 compute-0 sudo[81909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81909]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 20:06:19 compute-0 sudo[81934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81934]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:19 compute-0 sudo[81990]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvqboyrjydzjvtdpysytmidphmbppbbe ; /usr/bin/python3'
Nov 25 20:06:19 compute-0 sudo[81990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:19 compute-0 sudo[81977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:19 compute-0 sudo[81977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:19 compute-0 sudo[81977]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 sudo[82010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph
Nov 25 20:06:20 compute-0 sudo[82010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82010]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 python3[82006]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 20:06:20 compute-0 sudo[82035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:20 compute-0 sudo[82035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82035]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 sudo[81990]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 sudo[82061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.client.admin.keyring.new
Nov 25 20:06:20 compute-0 sudo[82061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82061]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 sudo[82087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:20 compute-0 sudo[82087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82087]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 sudo[82112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:20 compute-0 sudo[82112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82112]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 sudo[82137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:20 compute-0 sudo[82137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82137]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 sudo[82163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.client.admin.keyring.new
Nov 25 20:06:20 compute-0 sudo[82206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkqopctscyylwmhmcquzmpxgifmyzabc ; /usr/bin/python3'
Nov 25 20:06:20 compute-0 sudo[82163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:20 compute-0 sudo[82163]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 ceph-mon[75144]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 20:06:20 compute-0 sudo[82236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:20 compute-0 sudo[82236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82236]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 python3[82212]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:20 compute-0 sudo[82261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.client.admin.keyring.new
Nov 25 20:06:20 compute-0 sudo[82261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82261]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:20 compute-0 podman[82279]: 2025-11-25 20:06:20.724859005 +0000 UTC m=+0.062329486 container create 4f797de75acae418891c33c6e7bde82d5c6b66d3f733963fc5d2fe74db8c1797 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:20 compute-0 systemd[1]: Started libpod-conmon-4f797de75acae418891c33c6e7bde82d5c6b66d3f733963fc5d2fe74db8c1797.scope.
Nov 25 20:06:20 compute-0 sudo[82298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:20 compute-0 sudo[82298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82298]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 podman[82279]: 2025-11-25 20:06:20.705300243 +0000 UTC m=+0.042770774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97415b5ac51ba6dc900e7f87b6f273953d6ec6925cf631206c3c7ab59fecb4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97415b5ac51ba6dc900e7f87b6f273953d6ec6925cf631206c3c7ab59fecb4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97415b5ac51ba6dc900e7f87b6f273953d6ec6925cf631206c3c7ab59fecb4c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:20 compute-0 podman[82279]: 2025-11-25 20:06:20.837541231 +0000 UTC m=+0.175011742 container init 4f797de75acae418891c33c6e7bde82d5c6b66d3f733963fc5d2fe74db8c1797 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:06:20 compute-0 podman[82279]: 2025-11-25 20:06:20.845341793 +0000 UTC m=+0.182812234 container start 4f797de75acae418891c33c6e7bde82d5c6b66d3f733963fc5d2fe74db8c1797 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 20:06:20 compute-0 podman[82279]: 2025-11-25 20:06:20.848995052 +0000 UTC m=+0.186465483 container attach 4f797de75acae418891c33c6e7bde82d5c6b66d3f733963fc5d2fe74db8c1797 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:20 compute-0 sudo[82330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.client.admin.keyring.new
Nov 25 20:06:20 compute-0 sudo[82330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82330]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:20 compute-0 sudo[82356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:20 compute-0 sudo[82356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:20 compute-0 sudo[82356]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 sudo[82381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 20:06:21 compute-0 sudo[82381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82381]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.client.admin.keyring
Nov 25 20:06:21 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.client.admin.keyring
Nov 25 20:06:21 compute-0 sudo[82406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:21 compute-0 sudo[82406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82406]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:21 compute-0 sudo[82431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config
Nov 25 20:06:21 compute-0 sudo[82431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82431]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 sudo[82475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:21 compute-0 sudo[82475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82475]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 sudo[82500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config
Nov 25 20:06:21 compute-0 sudo[82500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:06:21 compute-0 dreamy_hermann[82326]: 
Nov 25 20:06:21 compute-0 dreamy_hermann[82326]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 20:06:21 compute-0 sudo[82500]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 systemd[1]: libpod-4f797de75acae418891c33c6e7bde82d5c6b66d3f733963fc5d2fe74db8c1797.scope: Deactivated successfully.
Nov 25 20:06:21 compute-0 podman[82279]: 2025-11-25 20:06:21.395873911 +0000 UTC m=+0.733344392 container died 4f797de75acae418891c33c6e7bde82d5c6b66d3f733963fc5d2fe74db8c1797 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e97415b5ac51ba6dc900e7f87b6f273953d6ec6925cf631206c3c7ab59fecb4c-merged.mount: Deactivated successfully.
Nov 25 20:06:21 compute-0 podman[82279]: 2025-11-25 20:06:21.441132693 +0000 UTC m=+0.778603134 container remove 4f797de75acae418891c33c6e7bde82d5c6b66d3f733963fc5d2fe74db8c1797 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:21 compute-0 systemd[1]: libpod-conmon-4f797de75acae418891c33c6e7bde82d5c6b66d3f733963fc5d2fe74db8c1797.scope: Deactivated successfully.
Nov 25 20:06:21 compute-0 sudo[82527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:21 compute-0 sudo[82527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82206]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 sudo[82527]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 sudo[82563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.client.admin.keyring.new
Nov 25 20:06:21 compute-0 ceph-mon[75144]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:21 compute-0 sudo[82563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82563]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 sudo[82588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:21 compute-0 sudo[82588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82588]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 sudo[82613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:21 compute-0 sudo[82613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82613]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 sudo[82659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koqdopysibzkdorqijpolvazszbvwkkn ; /usr/bin/python3'
Nov 25 20:06:21 compute-0 sudo[82659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:21 compute-0 sudo[82664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:21 compute-0 sudo[82664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82664]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 sudo[82689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.client.admin.keyring.new
Nov 25 20:06:21 compute-0 sudo[82689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:21 compute-0 sudo[82689]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:21 compute-0 python3[82663]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:22 compute-0 podman[82715]: 2025-11-25 20:06:22.002068714 +0000 UTC m=+0.070201641 container create 4b3a8ee3df42b57b3929a0d635f6bfa29f8ee2bbeecfe99393564fcde21e7b7f (image=quay.io/ceph/ceph:v18, name=optimistic_neumann, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:22 compute-0 systemd[1]: Started libpod-conmon-4b3a8ee3df42b57b3929a0d635f6bfa29f8ee2bbeecfe99393564fcde21e7b7f.scope.
Nov 25 20:06:22 compute-0 podman[82715]: 2025-11-25 20:06:21.978149143 +0000 UTC m=+0.046282100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:22 compute-0 sudo[82748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3eb537c50918ff1c1c691364a95e51058bed4de10ce9f70147fc4243d505c91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3eb537c50918ff1c1c691364a95e51058bed4de10ce9f70147fc4243d505c91/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3eb537c50918ff1c1c691364a95e51058bed4de10ce9f70147fc4243d505c91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:22 compute-0 sudo[82748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[82748]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:22 compute-0 podman[82715]: 2025-11-25 20:06:22.113504796 +0000 UTC m=+0.181637743 container init 4b3a8ee3df42b57b3929a0d635f6bfa29f8ee2bbeecfe99393564fcde21e7b7f (image=quay.io/ceph/ceph:v18, name=optimistic_neumann, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:22 compute-0 podman[82715]: 2025-11-25 20:06:22.123428156 +0000 UTC m=+0.191561103 container start 4b3a8ee3df42b57b3929a0d635f6bfa29f8ee2bbeecfe99393564fcde21e7b7f (image=quay.io/ceph/ceph:v18, name=optimistic_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:22 compute-0 podman[82715]: 2025-11-25 20:06:22.127583439 +0000 UTC m=+0.195716386 container attach 4b3a8ee3df42b57b3929a0d635f6bfa29f8ee2bbeecfe99393564fcde21e7b7f (image=quay.io/ceph/ceph:v18, name=optimistic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:22 compute-0 sudo[82781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.client.admin.keyring.new
Nov 25 20:06:22 compute-0 sudo[82781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[82781]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:22 compute-0 sudo[82807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:22 compute-0 sudo[82807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[82807]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:22 compute-0 sudo[82832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.client.admin.keyring.new
Nov 25 20:06:22 compute-0 sudo[82832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[82832]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:22 compute-0 sudo[82857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:22 compute-0 sudo[82857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[82857]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:22 compute-0 sudo[82882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-712dd110-763a-5547-8ef7-acda1414fdce/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.client.admin.keyring.new /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.client.admin.keyring
Nov 25 20:06:22 compute-0 sudo[82882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[82882]: pam_unix(sudo:session): session closed for user root
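The four sudo sessions above are cephadm's staged keyring install: write the new admin keyring into a throwaway tree under /tmp, fix ownership and permissions there, then move it over the live copy in one mv. A minimal shell sketch of the same three commands, with the fsid and paths taken from the log (an illustration of the pattern, not cephadm's exact code):

    # staged install of the admin keyring, as performed by cephadm above
    FSID=712dd110-763a-5547-8ef7-acda1414fdce
    NEW=/tmp/cephadm-$FSID/var/lib/ceph/$FSID/config/ceph.client.admin.keyring.new
    sudo chown -R 0:0 "$NEW"        # root-owned
    sudo chmod 600 "$NEW"           # readable/writable by root only
    sudo mv "$NEW" /var/lib/ceph/$FSID/config/ceph.client.admin.keyring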
Nov 25 20:06:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:06:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:22 compute-0 ceph-mgr[75443]: [progress INFO root] update: starting ev d5ebcd91-088f-4af9-93f3-988570312e99 (Updating crash deployment (+1 -> 1))
Nov 25 20:06:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 25 20:06:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 20:06:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
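The mon_command above is the wire form of the key request the cephadm mgr module makes while deploying the crash daemon; auth get-or-create returns the entity's existing key or mints a new one with the given caps, so the call is idempotent. The equivalent CLI form of the logged JSON:

    # same request as the audited mon_command, issued from the admin CLI
    ceph auth get-or-create client.crash.compute-0 \
        mon 'profile crash' \
        mgr 'profile crash'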
Nov 25 20:06:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:22 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:22 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 25 20:06:22 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 25 20:06:22 compute-0 ceph-mon[75144]: Updating compute-0:/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/config/ceph.client.admin.keyring
Nov 25 20:06:22 compute-0 ceph-mon[75144]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:06:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 20:06:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 20:06:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:22 compute-0 sudo[82926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:22 compute-0 sudo[82926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[82926]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:22 compute-0 sudo[82951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:22 compute-0 sudo[82951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[82951]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 25 20:06:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3101693558' entity='client.admin' 
Nov 25 20:06:22 compute-0 systemd[1]: libpod-4b3a8ee3df42b57b3929a0d635f6bfa29f8ee2bbeecfe99393564fcde21e7b7f.scope: Deactivated successfully.
Nov 25 20:06:22 compute-0 podman[82715]: 2025-11-25 20:06:22.730301248 +0000 UTC m=+0.798434155 container died 4b3a8ee3df42b57b3929a0d635f6bfa29f8ee2bbeecfe99393564fcde21e7b7f (image=quay.io/ceph/ceph:v18, name=optimistic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:06:22 compute-0 sudo[82976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:22 compute-0 sudo[82976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[82976]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3eb537c50918ff1c1c691364a95e51058bed4de10ce9f70147fc4243d505c91-merged.mount: Deactivated successfully.
Nov 25 20:06:22 compute-0 podman[82715]: 2025-11-25 20:06:22.766225615 +0000 UTC m=+0.834358522 container remove 4b3a8ee3df42b57b3929a0d635f6bfa29f8ee2bbeecfe99393564fcde21e7b7f (image=quay.io/ceph/ceph:v18, name=optimistic_neumann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:22 compute-0 sudo[82659]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:22 compute-0 ansible-async_wrapper.py[81110]: Done in kid B.
Nov 25 20:06:22 compute-0 systemd[1]: libpod-conmon-4b3a8ee3df42b57b3929a0d635f6bfa29f8ee2bbeecfe99393564fcde21e7b7f.scope: Deactivated successfully.
Nov 25 20:06:22 compute-0 sudo[83009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:22 compute-0 sudo[83009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:22 compute-0 sudo[83071]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwmbvvuujstzfjjoyzanbezhptaisak ; /usr/bin/python3'
Nov 25 20:06:22 compute-0 sudo[83071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:23 compute-0 python3[83075]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
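Unwrapped, the _raw_params above is a one-shot ceph CLI run inside a disposable container; every argument below is copied from the logged invocation (--rm discards the container on exit, and the :z volume suffixes relabel the mounts for SELinux):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 712dd110-763a-5547-8ef7-acda1414fdce \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config set global mon_cluster_log_to_file true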
Nov 25 20:06:23 compute-0 podman[83102]: 2025-11-25 20:06:23.207909301 +0000 UTC m=+0.047195475 container create 8a6a5e19e2e20c160f91cf860ecda96ae28f163afe211fc3b62be8ca0b4e0e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:06:23 compute-0 podman[83109]: 2025-11-25 20:06:23.233992182 +0000 UTC m=+0.058773790 container create c2b7bd62ecd12670895705d7bfc687eda29476d1711501bcaccc3501ef507399 (image=quay.io/ceph/ceph:v18, name=nifty_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:23 compute-0 systemd[1]: Started libpod-conmon-8a6a5e19e2e20c160f91cf860ecda96ae28f163afe211fc3b62be8ca0b4e0e8c.scope.
Nov 25 20:06:23 compute-0 systemd[1]: Started libpod-conmon-c2b7bd62ecd12670895705d7bfc687eda29476d1711501bcaccc3501ef507399.scope.
Nov 25 20:06:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c65e781d3e31d025bb8de423c06a748a352f92c78ccd40e2d4e1813e8cf4e892/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c65e781d3e31d025bb8de423c06a748a352f92c78ccd40e2d4e1813e8cf4e892/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c65e781d3e31d025bb8de423c06a748a352f92c78ccd40e2d4e1813e8cf4e892/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:23 compute-0 podman[83102]: 2025-11-25 20:06:23.186908041 +0000 UTC m=+0.026194235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:23 compute-0 podman[83102]: 2025-11-25 20:06:23.282958884 +0000 UTC m=+0.122245148 container init 8a6a5e19e2e20c160f91cf860ecda96ae28f163afe211fc3b62be8ca0b4e0e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:23 compute-0 podman[83109]: 2025-11-25 20:06:23.28945554 +0000 UTC m=+0.114237188 container init c2b7bd62ecd12670895705d7bfc687eda29476d1711501bcaccc3501ef507399 (image=quay.io/ceph/ceph:v18, name=nifty_haibt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:06:23 compute-0 podman[83102]: 2025-11-25 20:06:23.29201207 +0000 UTC m=+0.131298244 container start 8a6a5e19e2e20c160f91cf860ecda96ae28f163afe211fc3b62be8ca0b4e0e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:23 compute-0 happy_meitner[83132]: 167 167
Nov 25 20:06:23 compute-0 systemd[1]: libpod-8a6a5e19e2e20c160f91cf860ecda96ae28f163afe211fc3b62be8ca0b4e0e8c.scope: Deactivated successfully.
Nov 25 20:06:23 compute-0 podman[83109]: 2025-11-25 20:06:23.296355318 +0000 UTC m=+0.121136936 container start c2b7bd62ecd12670895705d7bfc687eda29476d1711501bcaccc3501ef507399 (image=quay.io/ceph/ceph:v18, name=nifty_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:06:23 compute-0 podman[83102]: 2025-11-25 20:06:23.297082438 +0000 UTC m=+0.136368622 container attach 8a6a5e19e2e20c160f91cf860ecda96ae28f163afe211fc3b62be8ca0b4e0e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:06:23 compute-0 podman[83102]: 2025-11-25 20:06:23.297655673 +0000 UTC m=+0.136941827 container died 8a6a5e19e2e20c160f91cf860ecda96ae28f163afe211fc3b62be8ca0b4e0e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:06:23 compute-0 podman[83109]: 2025-11-25 20:06:23.205566027 +0000 UTC m=+0.030347735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:23 compute-0 podman[83109]: 2025-11-25 20:06:23.30895038 +0000 UTC m=+0.133731988 container attach c2b7bd62ecd12670895705d7bfc687eda29476d1711501bcaccc3501ef507399 (image=quay.io/ceph/ceph:v18, name=nifty_haibt, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 20:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-10fc9c0634492c618f8be2b6abd3c61a152b511e4a6035fde87143916b43e600-merged.mount: Deactivated successfully.
Nov 25 20:06:23 compute-0 podman[83102]: 2025-11-25 20:06:23.33464331 +0000 UTC m=+0.173929484 container remove 8a6a5e19e2e20c160f91cf860ecda96ae28f163afe211fc3b62be8ca0b4e0e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:23 compute-0 systemd[1]: libpod-conmon-8a6a5e19e2e20c160f91cf860ecda96ae28f163afe211fc3b62be8ca0b4e0e8c.scope: Deactivated successfully.
Nov 25 20:06:23 compute-0 systemd[1]: Reloading.
Nov 25 20:06:23 compute-0 systemd-rc-local-generator[83182]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:23 compute-0 systemd-sysv-generator[83185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:23 compute-0 ceph-mon[75144]: Deploying daemon crash.compute-0 on compute-0
Nov 25 20:06:23 compute-0 ceph-mon[75144]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:23 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3101693558' entity='client.admin' 
Nov 25 20:06:23 compute-0 systemd[1]: Reloading.
Nov 25 20:06:23 compute-0 systemd-sysv-generator[83242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:23 compute-0 systemd-rc-local-generator[83237]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:23 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 25 20:06:23 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3265309152' entity='client.admin' 
Nov 25 20:06:23 compute-0 podman[83252]: 2025-11-25 20:06:23.892621501 +0000 UTC m=+0.028263500 container died c2b7bd62ecd12670895705d7bfc687eda29476d1711501bcaccc3501ef507399 (image=quay.io/ceph/ceph:v18, name=nifty_haibt, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:23 compute-0 systemd[1]: libpod-c2b7bd62ecd12670895705d7bfc687eda29476d1711501bcaccc3501ef507399.scope: Deactivated successfully.
Nov 25 20:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c65e781d3e31d025bb8de423c06a748a352f92c78ccd40e2d4e1813e8cf4e892-merged.mount: Deactivated successfully.
Nov 25 20:06:23 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:06:23 compute-0 podman[83252]: 2025-11-25 20:06:23.982615989 +0000 UTC m=+0.118257918 container remove c2b7bd62ecd12670895705d7bfc687eda29476d1711501bcaccc3501ef507399 (image=quay.io/ceph/ceph:v18, name=nifty_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:06:23 compute-0 systemd[1]: libpod-conmon-c2b7bd62ecd12670895705d7bfc687eda29476d1711501bcaccc3501ef507399.scope: Deactivated successfully.
Nov 25 20:06:24 compute-0 sudo[83071]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:24 compute-0 sudo[83345]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juqchpcpyquyklopgyfrbcmjdkkfmmeb ; /usr/bin/python3'
Nov 25 20:06:24 compute-0 sudo[83345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:24 compute-0 podman[83320]: 2025-11-25 20:06:24.290767383 +0000 UTC m=+0.073402608 container create 04730c9747e1c9f7cb55667ced492bdf302351bf90a386a5ef539f6c3741b5ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:06:24 compute-0 podman[83320]: 2025-11-25 20:06:24.2576162 +0000 UTC m=+0.040251475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef9373af4b4309052909c2b78faecae247747f986b465e7be56f388f6f4e66fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef9373af4b4309052909c2b78faecae247747f986b465e7be56f388f6f4e66fa/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef9373af4b4309052909c2b78faecae247747f986b465e7be56f388f6f4e66fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef9373af4b4309052909c2b78faecae247747f986b465e7be56f388f6f4e66fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:24 compute-0 podman[83320]: 2025-11-25 20:06:24.395976264 +0000 UTC m=+0.178611539 container init 04730c9747e1c9f7cb55667ced492bdf302351bf90a386a5ef539f6c3741b5ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 20:06:24 compute-0 podman[83320]: 2025-11-25 20:06:24.402255036 +0000 UTC m=+0.184890261 container start 04730c9747e1c9f7cb55667ced492bdf302351bf90a386a5ef539f6c3741b5ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:06:24 compute-0 bash[83320]: 04730c9747e1c9f7cb55667ced492bdf302351bf90a386a5ef539f6c3741b5ea
Nov 25 20:06:24 compute-0 systemd[1]: Started Ceph crash.compute-0 for 712dd110-763a-5547-8ef7-acda1414fdce.
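cephadm wraps each daemon in a systemd unit named after the cluster fsid, so the 'Reloading.' entries above are daemon-reloads after the new unit file was written, followed by the unit start just logged. Under cephadm's usual ceph-<fsid>@<daemon>.service naming (an assumption here; the log prints only the unit description), the daemon could be inspected with:

    systemctl status 'ceph-712dd110-763a-5547-8ef7-acda1414fdce@crash.compute-0.service'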
Nov 25 20:06:24 compute-0 python3[83352]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
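This is the same disposable-container pattern as the earlier config set call, now running osd set-require-min-compat-client mimic, which refuses connections from clients older than Mimic; the mon dispatches and finishes the command a second later (20:06:25 below). A follow-up check, not part of this log, would be to read the flag back from the osdmap:

    ceph osd dump | grep require_min_compat_client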
Nov 25 20:06:24 compute-0 sudo[83009]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 20:06:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mgr[75443]: [progress INFO root] complete: finished ev d5ebcd91-088f-4af9-93f3-988570312e99 (Updating crash deployment (+1 -> 1))
Nov 25 20:06:24 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event d5ebcd91-088f-4af9-93f3-988570312e99 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 25 20:06:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 20:06:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 25b143ab-cfff-4dbd-8985-b322ecffc481 does not exist
Nov 25 20:06:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 20:06:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mgr[75443]: [progress INFO root] update: starting ev 9d41468c-8f8c-4426-a925-1e8fa027a33f (Updating mgr deployment (+1 -> 2))
Nov 25 20:06:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.cvdjmy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 20:06:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cvdjmy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 20:06:24 compute-0 podman[83360]: 2025-11-25 20:06:24.526678171 +0000 UTC m=+0.069348848 container create dafeb6dcc9c325d0066d55002a2adabe372ea2c412034ee7829c1bcd4c8dd0bf (image=quay.io/ceph/ceph:v18, name=boring_joliot, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:06:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cvdjmy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 20:06:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 20:06:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 20:06:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:24 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.cvdjmy on compute-0
Nov 25 20:06:24 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.cvdjmy on compute-0
Nov 25 20:06:24 compute-0 podman[83360]: 2025-11-25 20:06:24.502665258 +0000 UTC m=+0.045336015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:24 compute-0 systemd[1]: Started libpod-conmon-dafeb6dcc9c325d0066d55002a2adabe372ea2c412034ee7829c1bcd4c8dd0bf.scope.
Nov 25 20:06:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ec85ebf5c0a6b763eb844559581c852cbda747d3651bc167c8bfd16f6c9d3b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ec85ebf5c0a6b763eb844559581c852cbda747d3651bc167c8bfd16f6c9d3b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ec85ebf5c0a6b763eb844559581c852cbda747d3651bc167c8bfd16f6c9d3b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:24 compute-0 sudo[83375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:24 compute-0 podman[83360]: 2025-11-25 20:06:24.653527232 +0000 UTC m=+0.196197999 container init dafeb6dcc9c325d0066d55002a2adabe372ea2c412034ee7829c1bcd4c8dd0bf (image=quay.io/ceph/ceph:v18, name=boring_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:24 compute-0 sudo[83375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:24 compute-0 podman[83360]: 2025-11-25 20:06:24.660729468 +0000 UTC m=+0.203400145 container start dafeb6dcc9c325d0066d55002a2adabe372ea2c412034ee7829c1bcd4c8dd0bf (image=quay.io/ceph/ceph:v18, name=boring_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:24 compute-0 sudo[83375]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:24 compute-0 podman[83360]: 2025-11-25 20:06:24.666732621 +0000 UTC m=+0.209403298 container attach dafeb6dcc9c325d0066d55002a2adabe372ea2c412034ee7829c1bcd4c8dd0bf (image=quay.io/ceph/ceph:v18, name=boring_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0[83355]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 25 20:06:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:24 compute-0 sudo[83406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:24 compute-0 sudo[83406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:24 compute-0 sudo[83406]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:24 compute-0 sudo[83432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:24 compute-0 sudo[83432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3265309152' entity='client.admin' 
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cvdjmy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cvdjmy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 20:06:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:24 compute-0 sudo[83432]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0[83355]: 2025-11-25T20:06:24.861+0000 7f2a8dc9d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 20:06:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0[83355]: 2025-11-25T20:06:24.861+0000 7f2a8dc9d640 -1 AuthRegistry(0x7f2a88066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 20:06:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0[83355]: 2025-11-25T20:06:24.863+0000 7f2a8dc9d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 20:06:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0[83355]: 2025-11-25T20:06:24.863+0000 7f2a8dc9d640 -1 AuthRegistry(0x7f2a8dc9c000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 20:06:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0[83355]: 2025-11-25T20:06:24.864+0000 7f2a877fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 20:06:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0[83355]: 2025-11-25T20:06:24.864+0000 7f2a8dc9d640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 25 20:06:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0[83355]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 25 20:06:24 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-crash-compute-0[83355]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
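The error burst above is ceph-crash's startup ping ('pinging cluster to exercise our key'): the ping appears to search only the default admin keyring paths under /etc/ceph, none of which exist in this container (the deployed keyring is ceph.client.crash.compute-0.keyring, per the bind mount at 20:06:24), so cephx is disabled for the attempt and the mon rejects it with errno 13. The daemon then carries on with its real job, scanning /var/lib/ceph/crash every 600 s. If one wanted to confirm the crash entity's key and caps from an admin shell (a diagnostic suggestion, not something in this log):

    ceph auth get client.crash.compute-0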
Nov 25 20:06:24 compute-0 sudo[83457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:24 compute-0 sudo[83457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 25 20:06:25 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2847479358' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 20:06:25 compute-0 podman[83554]: 2025-11-25 20:06:25.337850311 +0000 UTC m=+0.046887377 container create e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:25 compute-0 systemd[1]: Started libpod-conmon-e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1.scope.
Nov 25 20:06:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:25 compute-0 podman[83554]: 2025-11-25 20:06:25.411893985 +0000 UTC m=+0.120931031 container init e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 20:06:25 compute-0 podman[83554]: 2025-11-25 20:06:25.317711522 +0000 UTC m=+0.026748568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:25 compute-0 podman[83554]: 2025-11-25 20:06:25.422452393 +0000 UTC m=+0.131489459 container start e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:25 compute-0 podman[83554]: 2025-11-25 20:06:25.426112732 +0000 UTC m=+0.135149788 container attach e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:25 compute-0 systemd[1]: libpod-e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1.scope: Deactivated successfully.
Nov 25 20:06:25 compute-0 dreamy_bassi[83570]: 167 167
Nov 25 20:06:25 compute-0 podman[83554]: 2025-11-25 20:06:25.429583406 +0000 UTC m=+0.138620442 container died e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:25 compute-0 conmon[83570]: conmon e48b9e9aa28d1fae782a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1.scope/container/memory.events
Nov 25 20:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce6c5b32f0f2f7755a19c2ffb68d0aa26b3dd007f9270f3c681f4f673744cfb6-merged.mount: Deactivated successfully.
Nov 25 20:06:25 compute-0 podman[83554]: 2025-11-25 20:06:25.479553966 +0000 UTC m=+0.188591032 container remove e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:25 compute-0 systemd[1]: libpod-conmon-e48b9e9aa28d1fae782a67c18ad27d13d0455057ac20c103a84b7c0c6c424af1.scope: Deactivated successfully.
Nov 25 20:06:25 compute-0 systemd[1]: Reloading.
Nov 25 20:06:25 compute-0 systemd-sysv-generator[83619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:25 compute-0 systemd-rc-local-generator[83616]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 25 20:06:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:06:25 compute-0 ceph-mon[75144]: Deploying daemon mgr.compute-0.cvdjmy on compute-0
Nov 25 20:06:25 compute-0 ceph-mon[75144]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:25 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2847479358' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 20:06:25 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2847479358' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 20:06:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 25 20:06:25 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 25 20:06:25 compute-0 boring_joliot[83381]: set require_min_compat_client to mimic
Nov 25 20:06:25 compute-0 systemd[1]: Reloading.
Nov 25 20:06:25 compute-0 podman[83360]: 2025-11-25 20:06:25.880504084 +0000 UTC m=+1.423174771 container died dafeb6dcc9c325d0066d55002a2adabe372ea2c412034ee7829c1bcd4c8dd0bf (image=quay.io/ceph/ceph:v18, name=boring_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:06:25 compute-0 systemd-sysv-generator[83664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:25 compute-0 systemd-rc-local-generator[83660]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:26 compute-0 systemd[1]: libpod-dafeb6dcc9c325d0066d55002a2adabe372ea2c412034ee7829c1bcd4c8dd0bf.scope: Deactivated successfully.
Nov 25 20:06:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-16ec85ebf5c0a6b763eb844559581c852cbda747d3651bc167c8bfd16f6c9d3b-merged.mount: Deactivated successfully.
Nov 25 20:06:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:26 compute-0 systemd[1]: Starting Ceph mgr.compute-0.cvdjmy for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:06:26 compute-0 podman[83360]: 2025-11-25 20:06:26.117215275 +0000 UTC m=+1.659885992 container remove dafeb6dcc9c325d0066d55002a2adabe372ea2c412034ee7829c1bcd4c8dd0bf (image=quay.io/ceph/ceph:v18, name=boring_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:06:26 compute-0 systemd[1]: libpod-conmon-dafeb6dcc9c325d0066d55002a2adabe372ea2c412034ee7829c1bcd4c8dd0bf.scope: Deactivated successfully.
Nov 25 20:06:26 compute-0 sudo[83345]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:26 compute-0 podman[83729]: 2025-11-25 20:06:26.465223243 +0000 UTC m=+0.069157393 container create ced4f25e213fa5fb0bf8b7048d60b781138f170b5d8274b0f44abce7c55ad641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6bc075466eeda59be010829720fd84b5eb4612a3aa3c9560b3d7d757e2c6c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6bc075466eeda59be010829720fd84b5eb4612a3aa3c9560b3d7d757e2c6c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6bc075466eeda59be010829720fd84b5eb4612a3aa3c9560b3d7d757e2c6c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6bc075466eeda59be010829720fd84b5eb4612a3aa3c9560b3d7d757e2c6c8/merged/var/lib/ceph/mgr/ceph-compute-0.cvdjmy supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:26 compute-0 podman[83729]: 2025-11-25 20:06:26.435260867 +0000 UTC m=+0.039195087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:26 compute-0 podman[83729]: 2025-11-25 20:06:26.577441986 +0000 UTC m=+0.181376166 container init ced4f25e213fa5fb0bf8b7048d60b781138f170b5d8274b0f44abce7c55ad641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:26 compute-0 podman[83729]: 2025-11-25 20:06:26.587019557 +0000 UTC m=+0.190953707 container start ced4f25e213fa5fb0bf8b7048d60b781138f170b5d8274b0f44abce7c55ad641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 20:06:26 compute-0 bash[83729]: ced4f25e213fa5fb0bf8b7048d60b781138f170b5d8274b0f44abce7c55ad641
Nov 25 20:06:26 compute-0 systemd[1]: Started Ceph mgr.compute-0.cvdjmy for 712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:06:26 compute-0 sudo[83457]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:26 compute-0 ceph-mgr[83748]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 20:06:26 compute-0 ceph-mgr[83748]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 20:06:26 compute-0 ceph-mgr[83748]: pidfile_write: ignore empty --pid-file
Nov 25 20:06:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:26 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:26 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 20:06:26 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: [progress INFO root] complete: finished ev 9d41468c-8f8c-4426-a925-1e8fa027a33f (Updating mgr deployment (+1 -> 2))
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event 9d41468c-8f8c-4426-a925-1e8fa027a33f (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 25 20:06:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 20:06:26 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:26 compute-0 sudo[83796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtblprnzqceqvxyxbndwaijkskvaswij ; /usr/bin/python3'
Nov 25 20:06:26 compute-0 sudo[83796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:26 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'alerts'
Nov 25 20:06:26 compute-0 sudo[83797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: [progress INFO root] Writing back 2 completed events
Nov 25 20:06:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 20:06:26 compute-0 sudo[83797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:26 compute-0 sudo[83797]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:26 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:06:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:06:26 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2847479358' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 20:06:26 compute-0 ceph-mon[75144]: osdmap e3: 0 total, 0 up, 0 in
Nov 25 20:06:26 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:26 compute-0 sudo[83824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:06:26 compute-0 sudo[83824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:26 compute-0 sudo[83824]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:26 compute-0 python3[83810]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
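
The Ansible task above shells out to podman to run `ceph orch apply` against the freshly bootstrapped cluster. A minimal Python sketch of the same invocation follows; the image, fsid, and mount paths are copied verbatim from the log line and are environment-specific, not canonical:

    import subprocess

    FSID = "712dd110-763a-5547-8ef7-acda1414fdce"            # fsid reported throughout this log
    SPEC = "/home/ceph-admin/specs/ceph_spec.yaml"           # service spec mounted into the container
    CONF = "/home/ceph-admin/assimilate_ceph.conf"           # extra conf bind-mounted by the task

    # Same podman invocation the Ansible task records above: host networking,
    # the admin conf/keyring bind-mounted from /etc/ceph, and the service
    # spec applied with `ceph orch apply --in-file`.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", f"{CONF}:/home/assimilate_ceph.conf:z",
        "--volume", f"{SPEC}:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", FSID,
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "apply", "--in-file", "/home/ceph_spec.yaml",
    ]
    subprocess.run(cmd, check=True)

The "Scheduled mon/mgr/osd.default_drive_group update..." lines further down are the orchestrator acknowledging this apply.
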
Nov 25 20:06:26 compute-0 sudo[83849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:26 compute-0 sudo[83849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:26 compute-0 sudo[83849]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:26 compute-0 podman[83853]: 2025-11-25 20:06:26.988465049 +0000 UTC m=+0.067042465 container create e2834dcc49f2689a8f8024c0759f0e3de521b28046e062e64353b4d8b44722dc (image=quay.io/ceph/ceph:v18, name=agitated_austin, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:27 compute-0 sudo[83883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:27 compute-0 sudo[83883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:27 compute-0 sudo[83883]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:27 compute-0 podman[83853]: 2025-11-25 20:06:26.962685358 +0000 UTC m=+0.041262814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:27 compute-0 ceph-mgr[83748]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 20:06:27 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'balancer'
Nov 25 20:06:27 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]: 2025-11-25T20:06:27.081+0000 7f611fe09140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 20:06:27 compute-0 systemd[1]: Started libpod-conmon-e2834dcc49f2689a8f8024c0759f0e3de521b28046e062e64353b4d8b44722dc.scope.
Nov 25 20:06:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:27 compute-0 sudo[83912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f359451400c408b0d8503e201a54b5edf53c51e41f9467d068082c259c101a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f359451400c408b0d8503e201a54b5edf53c51e41f9467d068082c259c101a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f359451400c408b0d8503e201a54b5edf53c51e41f9467d068082c259c101a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:27 compute-0 sudo[83912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:27 compute-0 sudo[83912]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:27 compute-0 podman[83853]: 2025-11-25 20:06:27.133558007 +0000 UTC m=+0.212135443 container init e2834dcc49f2689a8f8024c0759f0e3de521b28046e062e64353b4d8b44722dc (image=quay.io/ceph/ceph:v18, name=agitated_austin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:27 compute-0 podman[83853]: 2025-11-25 20:06:27.144692009 +0000 UTC m=+0.223269435 container start e2834dcc49f2689a8f8024c0759f0e3de521b28046e062e64353b4d8b44722dc (image=quay.io/ceph/ceph:v18, name=agitated_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:06:27 compute-0 podman[83853]: 2025-11-25 20:06:27.149185711 +0000 UTC m=+0.227763127 container attach e2834dcc49f2689a8f8024c0759f0e3de521b28046e062e64353b4d8b44722dc (image=quay.io/ceph/ceph:v18, name=agitated_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:27 compute-0 sudo[83942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:06:27 compute-0 sudo[83942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
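
The `cephadm ... ls` command above asks cephadm to enumerate the daemons it manages on this host; it prints a JSON array. A hedged sketch of reading that output — the binary path is copied from the sudo record, while keys such as "name" and "state" are the customary cephadm inventory fields, assumed here rather than verified against this exact build:

    import json
    import subprocess

    # `cephadm ls` emits a JSON array describing each daemon on the host.
    out = subprocess.run(
        ["sudo", "python3",
         "/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/"
         "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
         "ls"],
        check=True, capture_output=True, text=True,
    ).stdout

    for daemon in json.loads(out):
        # "name" and "state" are the usual inventory keys (an assumption,
        # not a contract); at this point in the log the list would include
        # mon.compute-0 and the two mgr daemons.
        print(daemon.get("name"), daemon.get("state"))
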
Nov 25 20:06:27 compute-0 ceph-mgr[83748]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 20:06:27 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'cephadm'
Nov 25 20:06:27 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]: 2025-11-25T20:06:27.344+0000 7f611fe09140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 20:06:27 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:27 compute-0 sudo[84051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:27 compute-0 sudo[84051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:27 compute-0 sudo[84051]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:27 compute-0 podman[84067]: 2025-11-25 20:06:27.854771328 +0000 UTC m=+0.073833929 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:06:27 compute-0 sudo[84091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:27 compute-0 sudo[84091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:27 compute-0 sudo[84091]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:27 compute-0 ceph-mon[75144]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:27 compute-0 sudo[84122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:27 compute-0 sudo[84122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:27 compute-0 sudo[84122]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:27 compute-0 podman[84067]: 2025-11-25 20:06:27.980412716 +0000 UTC m=+0.199475277 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:28 compute-0 sudo[84148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 25 20:06:28 compute-0 sudo[84148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:28 compute-0 sudo[83942]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:28 compute-0 sudo[84148]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev c783fb47-2b45-40c8-b648-91162f1ff496 does not exist
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 57b3360f-af7a-4ea1-82a8-26b8fc6652d6 does not exist
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 5aaf7a37-f067-496f-8719-ccbb4c5822dd does not exist
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [cephadm INFO root] Added host compute-0
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 agitated_austin[83937]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 20:06:28 compute-0 agitated_austin[83937]: Scheduled mon update...
Nov 25 20:06:28 compute-0 agitated_austin[83937]: Scheduled mgr update...
Nov 25 20:06:28 compute-0 agitated_austin[83937]: Scheduled osd.default_drive_group update...
Nov 25 20:06:28 compute-0 systemd[1]: libpod-e2834dcc49f2689a8f8024c0759f0e3de521b28046e062e64353b4d8b44722dc.scope: Deactivated successfully.
Nov 25 20:06:28 compute-0 podman[83853]: 2025-11-25 20:06:28.430873792 +0000 UTC m=+1.509451268 container died e2834dcc49f2689a8f8024c0759f0e3de521b28046e062e64353b4d8b44722dc (image=quay.io/ceph/ceph:v18, name=agitated_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:28 compute-0 sudo[84256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:28 compute-0 sudo[84256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:28 compute-0 sudo[84256]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2f359451400c408b0d8503e201a54b5edf53c51e41f9467d068082c259c101a-merged.mount: Deactivated successfully.
Nov 25 20:06:28 compute-0 podman[83853]: 2025-11-25 20:06:28.485658252 +0000 UTC m=+1.564235678 container remove e2834dcc49f2689a8f8024c0759f0e3de521b28046e062e64353b4d8b44722dc (image=quay.io/ceph/ceph:v18, name=agitated_austin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:28 compute-0 systemd[1]: libpod-conmon-e2834dcc49f2689a8f8024c0759f0e3de521b28046e062e64353b4d8b44722dc.scope: Deactivated successfully.
Nov 25 20:06:28 compute-0 sudo[83796]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:28 compute-0 sudo[84289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:06:28 compute-0 sudo[84289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:28 compute-0 sudo[84289]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:28 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 20:06:28 compute-0 sudo[84318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:28 compute-0 sudo[84318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:28 compute-0 sudo[84318]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:28 compute-0 sudo[84343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:28 compute-0 sudo[84343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:28 compute-0 sudo[84343]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:28 compute-0 sudo[84370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:28 compute-0 sudo[84370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:28 compute-0 sudo[84370]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 20:06:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:28 compute-0 sudo[84446]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdptjnftwcsmqqsxraxcpzrwodcekfpd ; /usr/bin/python3'
Nov 25 20:06:28 compute-0 sudo[84446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:06:28 compute-0 sudo[84410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:28 compute-0 sudo[84410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:29 compute-0 python3[84452]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:29 compute-0 podman[84466]: 2025-11-25 20:06:29.135747439 +0000 UTC m=+0.058009889 container create d488efb63d8330591ce199b43bb5e979720da0a0223cd39f51b3c8cf481a11dd (image=quay.io/ceph/ceph:v18, name=cool_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:06:29 compute-0 systemd[1]: Started libpod-conmon-d488efb63d8330591ce199b43bb5e979720da0a0223cd39f51b3c8cf481a11dd.scope.
Nov 25 20:06:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9801b7c027cb30865c9c2993c79f9e52e2b0afec79bec405130063cfa359dc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9801b7c027cb30865c9c2993c79f9e52e2b0afec79bec405130063cfa359dc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9801b7c027cb30865c9c2993c79f9e52e2b0afec79bec405130063cfa359dc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:29 compute-0 podman[84466]: 2025-11-25 20:06:29.113089803 +0000 UTC m=+0.035352243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:06:29 compute-0 podman[84487]: 2025-11-25 20:06:29.218921582 +0000 UTC m=+0.072601876 container create 2ea257d1b7f7b6adcbaf72658381ad2a96094bb840e4aa90f20713419ebf3c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:29 compute-0 podman[84466]: 2025-11-25 20:06:29.224156445 +0000 UTC m=+0.146418885 container init d488efb63d8330591ce199b43bb5e979720da0a0223cd39f51b3c8cf481a11dd (image=quay.io/ceph/ceph:v18, name=cool_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:06:29 compute-0 podman[84466]: 2025-11-25 20:06:29.236246093 +0000 UTC m=+0.158508503 container start d488efb63d8330591ce199b43bb5e979720da0a0223cd39f51b3c8cf481a11dd (image=quay.io/ceph/ceph:v18, name=cool_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 20:06:29 compute-0 podman[84466]: 2025-11-25 20:06:29.258938501 +0000 UTC m=+0.181200951 container attach d488efb63d8330591ce199b43bb5e979720da0a0223cd39f51b3c8cf481a11dd (image=quay.io/ceph/ceph:v18, name=cool_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 20:06:29 compute-0 systemd[1]: Started libpod-conmon-2ea257d1b7f7b6adcbaf72658381ad2a96094bb840e4aa90f20713419ebf3c75.scope.
Nov 25 20:06:29 compute-0 podman[84487]: 2025-11-25 20:06:29.186386387 +0000 UTC m=+0.040066771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:29 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'crash'
Nov 25 20:06:29 compute-0 podman[84487]: 2025-11-25 20:06:29.320563167 +0000 UTC m=+0.174243551 container init 2ea257d1b7f7b6adcbaf72658381ad2a96094bb840e4aa90f20713419ebf3c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 20:06:29 compute-0 podman[84487]: 2025-11-25 20:06:29.33055309 +0000 UTC m=+0.184233414 container start 2ea257d1b7f7b6adcbaf72658381ad2a96094bb840e4aa90f20713419ebf3c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 20:06:29 compute-0 podman[84487]: 2025-11-25 20:06:29.334725313 +0000 UTC m=+0.188405627 container attach 2ea257d1b7f7b6adcbaf72658381ad2a96094bb840e4aa90f20713419ebf3c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:29 compute-0 kind_kirch[84509]: 167 167
Nov 25 20:06:29 compute-0 systemd[1]: libpod-2ea257d1b7f7b6adcbaf72658381ad2a96094bb840e4aa90f20713419ebf3c75.scope: Deactivated successfully.
Nov 25 20:06:29 compute-0 podman[84487]: 2025-11-25 20:06:29.340096209 +0000 UTC m=+0.193776513 container died 2ea257d1b7f7b6adcbaf72658381ad2a96094bb840e4aa90f20713419ebf3c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:06:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b4f1e353ca8c9a5ec78df1212bf0154bd4a56d6a3be2d73c5f288e8cb62df06-merged.mount: Deactivated successfully.
Nov 25 20:06:29 compute-0 podman[84487]: 2025-11-25 20:06:29.392021562 +0000 UTC m=+0.245701886 container remove 2ea257d1b7f7b6adcbaf72658381ad2a96094bb840e4aa90f20713419ebf3c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:29 compute-0 systemd[1]: libpod-conmon-2ea257d1b7f7b6adcbaf72658381ad2a96094bb840e4aa90f20713419ebf3c75.scope: Deactivated successfully.
Nov 25 20:06:29 compute-0 sudo[84410]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:29 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:29 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:29 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.hdjasd (unknown last config time)...
Nov 25 20:06:29 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.hdjasd (unknown last config time)...
Nov 25 20:06:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hdjasd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 20:06:29 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hdjasd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
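
The `auth get-or-create` mon_command in the audit line above is issued by the orchestrator while reconfiguring the standby mgr. On the command line the same request is the familiar `ceph auth get-or-create`, with the capability profile spelled out exactly as logged; a sketch, assuming the admin conf/keyring in /etc/ceph as mounted earlier:

    import subprocess

    # CLI equivalent of the mon_command recorded above: create (or fetch)
    # the mgr daemon's key with the standard mgr capability profile.
    subprocess.run(
        ["ceph", "auth", "get-or-create", "mgr.compute-0.hdjasd",
         "mon", "profile mgr",
         "osd", "allow *",
         "mds", "allow *"],
        check=True,
    )
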
Nov 25 20:06:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 20:06:29 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 20:06:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:29 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:29 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.hdjasd on compute-0
Nov 25 20:06:29 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.hdjasd on compute-0
Nov 25 20:06:29 compute-0 sudo[84528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:29 compute-0 sudo[84528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:29 compute-0 sudo[84528]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:29 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]: 2025-11-25T20:06:29.596+0000 7f611fe09140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 20:06:29 compute-0 ceph-mgr[83748]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 20:06:29 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'dashboard'
Nov 25 20:06:29 compute-0 sudo[84572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:29 compute-0 sudo[84572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:29 compute-0 sudo[84572]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:29 compute-0 sudo[84597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:29 compute-0 sudo[84597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:29 compute-0 sudo[84597]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:29 compute-0 sudo[84622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:29 compute-0 sudo[84622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 20:06:29 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921147801' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 20:06:29 compute-0 cool_cray[84500]: 
Nov 25 20:06:29 compute-0 cool_cray[84500]: {"fsid":"712dd110-763a-5547-8ef7-acda1414fdce","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":78,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-25T20:05:07.852558+0000","services":{}},"progress_events":{}}
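
The JSON blob above is the `ceph status --format json` document that the preceding Ansible task pipes through `jq .osdmap.num_up_osds`; with num_osds still 0, the cluster reports HEALTH_WARN/TOO_FEW_OSDS. A small Python equivalent of that jq filter — the retry loop is illustrative only, not part of the recorded task:

    import json
    import subprocess
    import time

    def num_up_osds() -> int:
        # Same probe the task above runs: `ceph status --format json`,
        # then read .osdmap.num_up_osds (0 at this point in the log).
        out = subprocess.run(
            ["ceph", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["osdmap"]["num_up_osds"]

    # Illustrative wait loop: the TOO_FEW_OSDS warning above clears once
    # osd.default_drive_group deploys OSDs and this count rises.
    while num_up_osds() == 0:
        time.sleep(5)
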
Nov 25 20:06:29 compute-0 systemd[1]: libpod-d488efb63d8330591ce199b43bb5e979720da0a0223cd39f51b3c8cf481a11dd.scope: Deactivated successfully.
Nov 25 20:06:29 compute-0 podman[84466]: 2025-11-25 20:06:29.850141515 +0000 UTC m=+0.772403915 container died d488efb63d8330591ce199b43bb5e979720da0a0223cd39f51b3c8cf481a11dd (image=quay.io/ceph/ceph:v18, name=cool_cray, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 20:06:29 compute-0 ceph-mon[75144]: Added host compute-0
Nov 25 20:06:29 compute-0 ceph-mon[75144]: Saving service mon spec with placement compute-0
Nov 25 20:06:29 compute-0 ceph-mon[75144]: Saving service mgr spec with placement compute-0
Nov 25 20:06:29 compute-0 ceph-mon[75144]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 20:06:29 compute-0 ceph-mon[75144]: Saving service osd.default_drive_group spec with placement compute-0
Nov 25 20:06:29 compute-0 ceph-mon[75144]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 20:06:29 compute-0 ceph-mon[75144]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 20:06:29 compute-0 ceph-mon[75144]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:29 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:29 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:29 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hdjasd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 20:06:29 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 20:06:29 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:29 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/921147801' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 20:06:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-db9801b7c027cb30865c9c2993c79f9e52e2b0afec79bec405130063cfa359dc-merged.mount: Deactivated successfully.
Nov 25 20:06:29 compute-0 podman[84466]: 2025-11-25 20:06:29.915902775 +0000 UTC m=+0.838165185 container remove d488efb63d8330591ce199b43bb5e979720da0a0223cd39f51b3c8cf481a11dd (image=quay.io/ceph/ceph:v18, name=cool_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:29 compute-0 systemd[1]: libpod-conmon-d488efb63d8330591ce199b43bb5e979720da0a0223cd39f51b3c8cf481a11dd.scope: Deactivated successfully.
Nov 25 20:06:29 compute-0 sudo[84446]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:30 compute-0 podman[84677]: 2025-11-25 20:06:30.078234962 +0000 UTC m=+0.034742257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:30 compute-0 podman[84677]: 2025-11-25 20:06:30.238970104 +0000 UTC m=+0.195477369 container create 635db091cc3c3f15c01ab933948e36707edef2abb92ae23dfb3f9f6ed4b7708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 20:06:30 compute-0 systemd[1]: Started libpod-conmon-635db091cc3c3f15c01ab933948e36707edef2abb92ae23dfb3f9f6ed4b7708b.scope.
Nov 25 20:06:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:30 compute-0 podman[84677]: 2025-11-25 20:06:30.32958824 +0000 UTC m=+0.286095585 container init 635db091cc3c3f15c01ab933948e36707edef2abb92ae23dfb3f9f6ed4b7708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:06:30 compute-0 podman[84677]: 2025-11-25 20:06:30.338584285 +0000 UTC m=+0.295091550 container start 635db091cc3c3f15c01ab933948e36707edef2abb92ae23dfb3f9f6ed4b7708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:06:30 compute-0 silly_williams[84694]: 167 167
Nov 25 20:06:30 compute-0 systemd[1]: libpod-635db091cc3c3f15c01ab933948e36707edef2abb92ae23dfb3f9f6ed4b7708b.scope: Deactivated successfully.
Nov 25 20:06:30 compute-0 podman[84677]: 2025-11-25 20:06:30.343623662 +0000 UTC m=+0.300131017 container attach 635db091cc3c3f15c01ab933948e36707edef2abb92ae23dfb3f9f6ed4b7708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 20:06:30 compute-0 podman[84677]: 2025-11-25 20:06:30.344010043 +0000 UTC m=+0.300517368 container died 635db091cc3c3f15c01ab933948e36707edef2abb92ae23dfb3f9f6ed4b7708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:06:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-396b1bb3923f756689018a47ddeb70454146b7427521f0d2b105451baa681841-merged.mount: Deactivated successfully.
Nov 25 20:06:30 compute-0 podman[84677]: 2025-11-25 20:06:30.386608831 +0000 UTC m=+0.343116106 container remove 635db091cc3c3f15c01ab933948e36707edef2abb92ae23dfb3f9f6ed4b7708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:06:30 compute-0 systemd[1]: libpod-conmon-635db091cc3c3f15c01ab933948e36707edef2abb92ae23dfb3f9f6ed4b7708b.scope: Deactivated successfully.
Nov 25 20:06:30 compute-0 sudo[84622]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:30 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:30 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:30 compute-0 sudo[84713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:30 compute-0 sudo[84713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:30 compute-0 sudo[84713]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:30 compute-0 sudo[84738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:30 compute-0 sudo[84738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:30 compute-0 sudo[84738]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:30 compute-0 sudo[84763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:30 compute-0 sudo[84763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:30 compute-0 sudo[84763]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:30 compute-0 sudo[84788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:06:30 compute-0 sudo[84788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:30 compute-0 ceph-mon[75144]: Reconfiguring mgr.compute-0.hdjasd (unknown last config time)...
Nov 25 20:06:30 compute-0 ceph-mon[75144]: Reconfiguring daemon mgr.compute-0.hdjasd on compute-0
Nov 25 20:06:30 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:30 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'devicehealth'
Nov 25 20:06:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:31 compute-0 podman[84886]: 2025-11-25 20:06:31.263723875 +0000 UTC m=+0.067865077 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:06:31 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]: 2025-11-25T20:06:31.270+0000 7f611fe09140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 20:06:31 compute-0 ceph-mgr[83748]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
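The "missing NOTIFY_TYPES member" warnings repeated below mean the mgr noticed a bundled Python module that does not declare which cluster notifications it wants; in this release the message appears cosmetic. For reference, a skeletal mgr module with the member declared — a sketch assuming the reef-era mgr_module interface, with illustrative names not taken from devicehealth:

    from mgr_module import MgrModule, NotifyType

    class ExampleModule(MgrModule):
        # Declaring NOTIFY_TYPES tells the mgr which notifications to deliver
        # and silences the "has missing NOTIFY_TYPES member" warning.
        NOTIFY_TYPES = [NotifyType.osd_map, NotifyType.pg_summary]

        def notify(self, notify_type, notify_id):
            self.log.debug("got %s notification", notify_type)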
Nov 25 20:06:31 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 20:06:31 compute-0 podman[84886]: 2025-11-25 20:06:31.349560811 +0000 UTC m=+0.153702023 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:06:31 compute-0 sudo[84788]: pam_unix(sudo:session): session closed for user root
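The sudo block that just closed ran `cephadm ... ls`, the orchestrator's per-host daemon inventory, which prints a JSON array describing each deployed daemon. A sketch of the same call — the versioned cephadm path under /var/lib/ceph/<fsid>/ from the log is abbreviated here to a placeholder:

    import json
    import subprocess

    # Placeholder for the checksummed copy logged above.
    CEPHADM = "/var/lib/ceph/<fsid>/cephadm.<checksum>"

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "ls"],
        check=True, capture_output=True, text=True,
    ).stdout
    for daemon in json.loads(out):
        # Each entry carries at least the daemon name and its unit state.
        print(daemon["name"], daemon.get("state"))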
Nov 25 20:06:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 20:06:31 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 20:06:31 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]:   from numpy import show_config as show_numpy_config
Nov 25 20:06:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:06:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:06:31 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]: 2025-11-25T20:06:31.801+0000 7f611fe09140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 20:06:31 compute-0 ceph-mgr[83748]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 20:06:31 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'influx'
Nov 25 20:06:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:06:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev da89cd7c-2109-49df-9072-2a2ec3a17a44 does not exist
Nov 25 20:06:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 20:06:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mgr[75443]: [progress INFO root] update: starting ev ead41db7-78cd-4654-8ea6-a4dbcc137ffd (Updating mgr deployment (-1 -> 1))
Nov 25 20:06:31 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.cvdjmy from compute-0 -- ports [8765]
Nov 25 20:06:31 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.cvdjmy from compute-0 -- ports [8765]
Nov 25 20:06:31 compute-0 sudo[84975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:31 compute-0 ceph-mon[75144]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 sudo[84975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:06:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:31 compute-0 sudo[84975]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:31 compute-0 sudo[85000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:31 compute-0 sudo[85000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:31 compute-0 sudo[85000]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:32 compute-0 sudo[85025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:32 compute-0 sudo[85025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:32 compute-0 sudo[85025]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:32 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]: 2025-11-25T20:06:32.039+0000 7f611fe09140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 20:06:32 compute-0 ceph-mgr[83748]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 20:06:32 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'insights'
Nov 25 20:06:32 compute-0 sudo[85050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 712dd110-763a-5547-8ef7-acda1414fdce --name mgr.compute-0.cvdjmy --force --tcp-ports 8765
Nov 25 20:06:32 compute-0 sudo[85050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:32 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'iostat'
Nov 25 20:06:32 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.cvdjmy for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:06:32 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy[83744]: 2025-11-25T20:06:32.511+0000 7f611fe09140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 20:06:32 compute-0 ceph-mgr[83748]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 20:06:32 compute-0 ceph-mgr[83748]: mgr[py] Loading python module 'k8sevents'
Nov 25 20:06:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:32 compute-0 podman[85143]: 2025-11-25 20:06:32.752653341 +0000 UTC m=+0.098572249 container died ced4f25e213fa5fb0bf8b7048d60b781138f170b5d8274b0f44abce7c55ad641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:06:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d6bc075466eeda59be010829720fd84b5eb4612a3aa3c9560b3d7d757e2c6c8-merged.mount: Deactivated successfully.
Nov 25 20:06:32 compute-0 podman[85143]: 2025-11-25 20:06:32.806336992 +0000 UTC m=+0.152255870 container remove ced4f25e213fa5fb0bf8b7048d60b781138f170b5d8274b0f44abce7c55ad641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:32 compute-0 bash[85143]: ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-cvdjmy
Nov 25 20:06:32 compute-0 systemd[1]: ceph-712dd110-763a-5547-8ef7-acda1414fdce@mgr.compute-0.cvdjmy.service: Main process exited, code=exited, status=143/n/a
Nov 25 20:06:32 compute-0 ceph-mon[75144]: Removing daemon mgr.compute-0.cvdjmy from compute-0 -- ports [8765]
Nov 25 20:06:32 compute-0 systemd[1]: ceph-712dd110-763a-5547-8ef7-acda1414fdce@mgr.compute-0.cvdjmy.service: Failed with result 'exit-code'.
Nov 25 20:06:32 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.cvdjmy for 712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:06:32 compute-0 systemd[1]: ceph-712dd110-763a-5547-8ef7-acda1414fdce@mgr.compute-0.cvdjmy.service: Consumed 7.216s CPU time.
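systemd's "status=143/n/a" and "Failed with result 'exit-code'" for the old mgr unit look alarming but only mean the container exited on SIGTERM when cephadm stopped it: shells and systemd encode death-by-signal as 128 plus the signal number, and 128 + 15 = 143. A one-line check of that arithmetic:

    import signal

    # Exit status 143 encodes "terminated by signal 15 (SIGTERM)": 128 + signo.
    assert 128 + int(signal.SIGTERM) == 143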
Nov 25 20:06:32 compute-0 systemd[1]: Reloading.
Nov 25 20:06:33 compute-0 systemd-rc-local-generator[85224]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:33 compute-0 systemd-sysv-generator[85228]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:33 compute-0 sudo[85050]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:33 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.cvdjmy
Nov 25 20:06:33 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.cvdjmy
Nov 25 20:06:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.cvdjmy"} v 0) v1
Nov 25 20:06:33 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.cvdjmy"}]: dispatch
Nov 25 20:06:33 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.cvdjmy"}]': finished
Nov 25 20:06:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 20:06:33 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:33 compute-0 ceph-mgr[75443]: [progress INFO root] complete: finished ev ead41db7-78cd-4654-8ea6-a4dbcc137ffd (Updating mgr deployment (-1 -> 1))
Nov 25 20:06:33 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event ead41db7-78cd-4654-8ea6-a4dbcc137ffd (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 25 20:06:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 20:06:33 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:33 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 11b58a89-1774-4981-b563-550b43db712d does not exist
Nov 25 20:06:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:06:33 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:06:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:06:33 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:06:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:33 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:33 compute-0 sudo[85237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:33 compute-0 sudo[85237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:33 compute-0 sudo[85237]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:33 compute-0 sudo[85262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:33 compute-0 sudo[85262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:33 compute-0 sudo[85262]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:33 compute-0 sudo[85287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:33 compute-0 sudo[85287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:33 compute-0 sudo[85287]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:33 compute-0 sudo[85312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:06:33 compute-0 sudo[85312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
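The ceph-volume invocation above receives its cluster config and bootstrap-osd keyring as a JSON blob on stdin (`--config-json -`), then prepares one OSD per listed LV. A sketch of that call and the assumed payload shape — field names follow cephadm's convention as far as I know, secrets redacted, CEPHADM being the same placeholder path as before:

    import json
    import subprocess

    # Assumed stdin payload for "--config-json -": a minimal ceph.conf plus the
    # bootstrap-osd keyring (values redacted; shape per cephadm's convention).
    payload = json.dumps({
        "config": "[global]\nfsid = 712dd110-763a-5547-8ef7-acda1414fdce\n",
        "keyring": "[client.bootstrap-osd]\n\tkey = <redacted>\n",
    })
    subprocess.run(
        ["sudo", "/bin/python3", CEPHADM,
         "ceph-volume", "--fsid", "712dd110-763a-5547-8ef7-acda1414fdce",
         "--config-json", "-", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"],
        input=payload, text=True, check=True)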
Nov 25 20:06:33 compute-0 ceph-mon[75144]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.cvdjmy"}]: dispatch
Nov 25 20:06:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.cvdjmy"}]': finished
Nov 25 20:06:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:06:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:06:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:34 compute-0 podman[85377]: 2025-11-25 20:06:34.122182727 +0000 UTC m=+0.056066501 container create ee65721d8e978ae359f78c752c2e71b4af9dae169cda4f4dc7b28c8240907a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:06:34 compute-0 systemd[1]: Started libpod-conmon-ee65721d8e978ae359f78c752c2e71b4af9dae169cda4f4dc7b28c8240907a36.scope.
Nov 25 20:06:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:34 compute-0 podman[85377]: 2025-11-25 20:06:34.104049921 +0000 UTC m=+0.037933775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:34 compute-0 podman[85377]: 2025-11-25 20:06:34.204483134 +0000 UTC m=+0.138366968 container init ee65721d8e978ae359f78c752c2e71b4af9dae169cda4f4dc7b28c8240907a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:06:34 compute-0 podman[85377]: 2025-11-25 20:06:34.211675106 +0000 UTC m=+0.145558920 container start ee65721d8e978ae359f78c752c2e71b4af9dae169cda4f4dc7b28c8240907a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:34 compute-0 sleepy_yalow[85394]: 167 167
Nov 25 20:06:34 compute-0 podman[85377]: 2025-11-25 20:06:34.216012655 +0000 UTC m=+0.149896469 container attach ee65721d8e978ae359f78c752c2e71b4af9dae169cda4f4dc7b28c8240907a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:06:34 compute-0 systemd[1]: libpod-ee65721d8e978ae359f78c752c2e71b4af9dae169cda4f4dc7b28c8240907a36.scope: Deactivated successfully.
Nov 25 20:06:34 compute-0 podman[85377]: 2025-11-25 20:06:34.219649123 +0000 UTC m=+0.153532927 container died ee65721d8e978ae359f78c752c2e71b4af9dae169cda4f4dc7b28c8240907a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9531e72be587cf3d999dc3089791ecb93a193b7e59c51a1e3494b9c410a3caf4-merged.mount: Deactivated successfully.
Nov 25 20:06:34 compute-0 podman[85377]: 2025-11-25 20:06:34.258897625 +0000 UTC m=+0.192781439 container remove ee65721d8e978ae359f78c752c2e71b4af9dae169cda4f4dc7b28c8240907a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:34 compute-0 systemd[1]: libpod-conmon-ee65721d8e978ae359f78c752c2e71b4af9dae169cda4f4dc7b28c8240907a36.scope: Deactivated successfully.
Nov 25 20:06:34 compute-0 podman[85418]: 2025-11-25 20:06:34.4360963 +0000 UTC m=+0.048556388 container create 5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_borg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 20:06:34 compute-0 systemd[1]: Started libpod-conmon-5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16.scope.
Nov 25 20:06:34 compute-0 podman[85418]: 2025-11-25 20:06:34.415210012 +0000 UTC m=+0.027670090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab3573339d39a068dcb5b1eac810ef7a0c2a4933287249fe4bf0612485f784e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab3573339d39a068dcb5b1eac810ef7a0c2a4933287249fe4bf0612485f784e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab3573339d39a068dcb5b1eac810ef7a0c2a4933287249fe4bf0612485f784e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab3573339d39a068dcb5b1eac810ef7a0c2a4933287249fe4bf0612485f784e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab3573339d39a068dcb5b1eac810ef7a0c2a4933287249fe4bf0612485f784e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:34 compute-0 podman[85418]: 2025-11-25 20:06:34.542613644 +0000 UTC m=+0.155073732 container init 5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:34 compute-0 podman[85418]: 2025-11-25 20:06:34.55460702 +0000 UTC m=+0.167067078 container start 5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_borg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 20:06:34 compute-0 podman[85418]: 2025-11-25 20:06:34.558555026 +0000 UTC m=+0.171015094 container attach 5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_borg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 20:06:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:34 compute-0 ceph-mon[75144]: Removing key for mgr.compute-0.cvdjmy
Nov 25 20:06:35 compute-0 ecstatic_borg[85435]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:06:35 compute-0 ecstatic_borg[85435]: --> relative data size: 1.0
Nov 25 20:06:35 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 20:06:35 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f0a2211a-2b5d-4914-9a66-9743102e8fa4
Nov 25 20:06:35 compute-0 ceph-mon[75144]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4"} v 0) v1
Nov 25 20:06:36 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2956979744' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4"}]: dispatch
Nov 25 20:06:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 25 20:06:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:06:36 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2956979744' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4"}]': finished
Nov 25 20:06:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 25 20:06:36 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 25 20:06:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:06:36 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:36 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 25 20:06:36 compute-0 lvm[85497]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 20:06:36 compute-0 lvm[85497]: VG ceph_vg0 finished
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 25 20:06:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:36 compute-0 ceph-mgr[75443]: [progress INFO root] Writing back 3 completed events
Nov 25 20:06:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 20:06:36 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 20:06:36 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621371647' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]:  stderr: got monmap epoch 1
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: --> Creating keyring file for osd.0
Nov 25 20:06:36 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2956979744' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4"}]: dispatch
Nov 25 20:06:36 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2956979744' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4"}]': finished
Nov 25 20:06:36 compute-0 ceph-mon[75144]: osdmap e4: 1 total, 0 up, 1 in
Nov 25 20:06:36 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:36 compute-0 ceph-mon[75144]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:36 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:36 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/621371647' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 25 20:06:36 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f0a2211a-2b5d-4914-9a66-9743102e8fa4 --setuser ceph --setgroup ceph
Nov 25 20:06:37 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 20:06:37 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 20:06:37 compute-0 ceph-mon[75144]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 20:06:37 compute-0 ceph-mon[75144]: Cluster is now healthy
Nov 25 20:06:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:38 compute-0 ceph-mon[75144]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:36.992+0000 7fddcd8ef740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:36.992+0000 7fddcd8ef740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:36.992+0000 7fddcd8ef740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:36.993+0000 7fddcd8ef740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
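That completes the prepare/activate cycle for osd.0: key generation, `osd new` registration, a tmpfs OSD directory, the block symlink, monmap fetch, bluestore mkfs, then prime-osd-dir. The repeated `_read_bdev_label ... Malformed input` and `_read_fsid unparsable uuid` stderr lines during mkfs are the tool probing a still-blank device and are expected, not failures. A hypothetical post-create check that the label now carries the uuid registered above:

    import json
    import subprocess

    # Read back the bluestore label that mkfs just wrote and compare its
    # osd_uuid with the one "osd new" registered (f0a2211a-... in the log).
    dev = "/dev/ceph_vg0/ceph_lv0"
    out = subprocess.run(
        ["ceph-bluestore-tool", "show-label", "--dev", dev],
        check=True, capture_output=True, text=True).stdout
    label = json.loads(out)[dev]
    assert label["osd_uuid"] == "f0a2211a-2b5d-4914-9a66-9743102e8fa4"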
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7e844079-8f15-40a1-8d48-4a531b96b291
Nov 25 20:06:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "7e844079-8f15-40a1-8d48-4a531b96b291"} v 0) v1
Nov 25 20:06:39 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/151066814' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e844079-8f15-40a1-8d48-4a531b96b291"}]: dispatch
Nov 25 20:06:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 25 20:06:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:06:39 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/151066814' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7e844079-8f15-40a1-8d48-4a531b96b291"}]': finished
Nov 25 20:06:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 25 20:06:39 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 25 20:06:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:06:39 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:06:39 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:06:39 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 20:06:39 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 20:06:39 compute-0 lvm[86430]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 20:06:39 compute-0 lvm[86430]: VG ceph_vg1 finished
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 20:06:39 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 25 20:06:39 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/151066814' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e844079-8f15-40a1-8d48-4a531b96b291"}]: dispatch
Nov 25 20:06:39 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/151066814' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7e844079-8f15-40a1-8d48-4a531b96b291"}]': finished
Nov 25 20:06:39 compute-0 ceph-mon[75144]: osdmap e5: 2 total, 0 up, 2 in
Nov 25 20:06:39 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:39 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:06:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 20:06:40 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1483900513' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 20:06:40 compute-0 ecstatic_borg[85435]:  stderr: got monmap epoch 1
Nov 25 20:06:40 compute-0 ecstatic_borg[85435]: --> Creating keyring file for osd.1
Nov 25 20:06:40 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 25 20:06:40 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 25 20:06:40 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 7e844079-8f15-40a1-8d48-4a531b96b291 --setuser ceph --setgroup ceph
Nov 25 20:06:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:40 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1483900513' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 20:06:40 compute-0 ceph-mon[75144]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:42 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:40.503+0000 7f9e8e3b7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 20:06:42 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:40.503+0000 7f9e8e3b7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 20:06:42 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:40.503+0000 7f9e8e3b7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 20:06:42 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:40.503+0000 7f9e8e3b7740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 25 20:06:42 compute-0 ecstatic_borg[85435]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 21cf5470-2713-4831-8402-4fccd506c64e
Nov 25 20:06:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "21cf5470-2713-4831-8402-4fccd506c64e"} v 0) v1
Nov 25 20:06:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/824730670' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "21cf5470-2713-4831-8402-4fccd506c64e"}]: dispatch
Nov 25 20:06:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 25 20:06:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:06:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/824730670' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "21cf5470-2713-4831-8402-4fccd506c64e"}]': finished
Nov 25 20:06:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 25 20:06:43 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 25 20:06:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:06:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:06:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:06:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:06:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:06:43 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 20:06:43 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:06:43 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:06:43 compute-0 lvm[87365]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 20:06:43 compute-0 lvm[87365]: VG ceph_vg2 finished
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 20:06:43 compute-0 ceph-mon[75144]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:43 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/824730670' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "21cf5470-2713-4831-8402-4fccd506c64e"}]: dispatch
Nov 25 20:06:43 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/824730670' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "21cf5470-2713-4831-8402-4fccd506c64e"}]': finished
Nov 25 20:06:43 compute-0 ceph-mon[75144]: osdmap e6: 3 total, 0 up, 3 in
Nov 25 20:06:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:06:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 20:06:43 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 25 20:06:44 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 20:06:44 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841060259' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 20:06:44 compute-0 ecstatic_borg[85435]:  stderr: got monmap epoch 1
Nov 25 20:06:44 compute-0 ecstatic_borg[85435]: --> Creating keyring file for osd.2
Nov 25 20:06:44 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 25 20:06:44 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 25 20:06:44 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 21cf5470-2713-4831-8402-4fccd506c64e --setuser ceph --setgroup ceph
Nov 25 20:06:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:44 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2841060259' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 20:06:45 compute-0 ceph-mon[75144]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:44.358+0000 7f47e5b95740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:44.359+0000 7f47e5b95740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:44.359+0000 7f47e5b95740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]:  stderr: 2025-11-25T20:06:44.359+0000 7f47e5b95740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 25 20:06:46 compute-0 ecstatic_borg[85435]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 25 20:06:46 compute-0 systemd[1]: libpod-5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16.scope: Deactivated successfully.
Nov 25 20:06:46 compute-0 systemd[1]: libpod-5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16.scope: Consumed 6.615s CPU time.
Nov 25 20:06:46 compute-0 podman[85418]: 2025-11-25 20:06:46.991510377 +0000 UTC m=+12.603970425 container died 5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ab3573339d39a068dcb5b1eac810ef7a0c2a4933287249fe4bf0612485f784e-merged.mount: Deactivated successfully.
Nov 25 20:06:47 compute-0 podman[85418]: 2025-11-25 20:06:47.062339975 +0000 UTC m=+12.674800063 container remove 5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:47 compute-0 systemd[1]: libpod-conmon-5628499f73f0911ed9d3be38663c4ac8275a1571d77dff088531b66d3bd57a16.scope: Deactivated successfully.
Nov 25 20:06:47 compute-0 sudo[85312]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:47 compute-0 sudo[88282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:47 compute-0 sudo[88282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:47 compute-0 sudo[88282]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:47 compute-0 sudo[88307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:47 compute-0 sudo[88307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:47 compute-0 sudo[88307]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:47 compute-0 sudo[88332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:47 compute-0 sudo[88332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:47 compute-0 sudo[88332]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:47 compute-0 sudo[88357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:06:47 compute-0 sudo[88357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:47 compute-0 ceph-mon[75144]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:47 compute-0 podman[88421]: 2025-11-25 20:06:47.826667332 +0000 UTC m=+0.052230226 container create 832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:47 compute-0 systemd[1]: Started libpod-conmon-832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea.scope.
Nov 25 20:06:47 compute-0 podman[88421]: 2025-11-25 20:06:47.800683973 +0000 UTC m=+0.026246927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:47 compute-0 podman[88421]: 2025-11-25 20:06:47.930655311 +0000 UTC m=+0.156218205 container init 832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 25 20:06:47 compute-0 podman[88421]: 2025-11-25 20:06:47.942704758 +0000 UTC m=+0.168267622 container start 832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:06:47 compute-0 podman[88421]: 2025-11-25 20:06:47.945648185 +0000 UTC m=+0.171211069 container attach 832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:47 compute-0 pensive_saha[88437]: 167 167
Nov 25 20:06:47 compute-0 systemd[1]: libpod-832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea.scope: Deactivated successfully.
Nov 25 20:06:47 compute-0 conmon[88437]: conmon 832feb53ff9e4f775ae8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea.scope/container/memory.events
Nov 25 20:06:47 compute-0 podman[88421]: 2025-11-25 20:06:47.954331552 +0000 UTC m=+0.179894456 container died 832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac818347e14b3be2a18c637397f82bd31b825bca64a81e18dde66ca5f36a1401-merged.mount: Deactivated successfully.
Nov 25 20:06:48 compute-0 podman[88421]: 2025-11-25 20:06:48.007592319 +0000 UTC m=+0.233155213 container remove 832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_saha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:48 compute-0 systemd[1]: libpod-conmon-832feb53ff9e4f775ae8055d1f44d5bb98e394914218f901bc06cf7220b746ea.scope: Deactivated successfully.
Nov 25 20:06:48 compute-0 podman[88461]: 2025-11-25 20:06:48.207189128 +0000 UTC m=+0.071139107 container create 270102d71da9261727a409d676c86203341be925af5f4f0f67648ace673d91fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ritchie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:06:48 compute-0 systemd[1]: Started libpod-conmon-270102d71da9261727a409d676c86203341be925af5f4f0f67648ace673d91fb.scope.
Nov 25 20:06:48 compute-0 podman[88461]: 2025-11-25 20:06:48.182568539 +0000 UTC m=+0.046518518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01887af98f1dbf587b789180e0b99d32b743ad2e11ddcb3c155b513c848eeac8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01887af98f1dbf587b789180e0b99d32b743ad2e11ddcb3c155b513c848eeac8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01887af98f1dbf587b789180e0b99d32b743ad2e11ddcb3c155b513c848eeac8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01887af98f1dbf587b789180e0b99d32b743ad2e11ddcb3c155b513c848eeac8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:48 compute-0 podman[88461]: 2025-11-25 20:06:48.310828626 +0000 UTC m=+0.174778615 container init 270102d71da9261727a409d676c86203341be925af5f4f0f67648ace673d91fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:48 compute-0 podman[88461]: 2025-11-25 20:06:48.324101099 +0000 UTC m=+0.188051048 container start 270102d71da9261727a409d676c86203341be925af5f4f0f67648ace673d91fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:48 compute-0 podman[88461]: 2025-11-25 20:06:48.327435028 +0000 UTC m=+0.191385007 container attach 270102d71da9261727a409d676c86203341be925af5f4f0f67648ace673d91fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ritchie, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:06:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:49 compute-0 magical_ritchie[88477]: {
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:     "0": [
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:         {
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "devices": [
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "/dev/loop3"
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             ],
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_name": "ceph_lv0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_size": "21470642176",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "name": "ceph_lv0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "tags": {
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.cluster_name": "ceph",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.crush_device_class": "",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.encrypted": "0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.osd_id": "0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.type": "block",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.vdo": "0"
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             },
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "type": "block",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "vg_name": "ceph_vg0"
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:         }
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:     ],
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:     "1": [
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:         {
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "devices": [
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "/dev/loop4"
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             ],
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_name": "ceph_lv1",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_size": "21470642176",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "name": "ceph_lv1",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "tags": {
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.cluster_name": "ceph",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.crush_device_class": "",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.encrypted": "0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.osd_id": "1",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.type": "block",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.vdo": "0"
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             },
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "type": "block",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "vg_name": "ceph_vg1"
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:         }
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:     ],
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:     "2": [
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:         {
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "devices": [
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "/dev/loop5"
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             ],
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_name": "ceph_lv2",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_size": "21470642176",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "name": "ceph_lv2",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "tags": {
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.cluster_name": "ceph",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.crush_device_class": "",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.encrypted": "0",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.osd_id": "2",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.type": "block",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:                 "ceph.vdo": "0"
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             },
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "type": "block",
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:             "vg_name": "ceph_vg2"
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:         }
Nov 25 20:06:49 compute-0 magical_ritchie[88477]:     ]
Nov 25 20:06:49 compute-0 magical_ritchie[88477]: }
Nov 25 20:06:49 compute-0 systemd[1]: libpod-270102d71da9261727a409d676c86203341be925af5f4f0f67648ace673d91fb.scope: Deactivated successfully.
Nov 25 20:06:49 compute-0 podman[88486]: 2025-11-25 20:06:49.184229594 +0000 UTC m=+0.037830522 container died 270102d71da9261727a409d676c86203341be925af5f4f0f67648ace673d91fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-01887af98f1dbf587b789180e0b99d32b743ad2e11ddcb3c155b513c848eeac8-merged.mount: Deactivated successfully.
Nov 25 20:06:49 compute-0 podman[88486]: 2025-11-25 20:06:49.226920917 +0000 UTC m=+0.080521835 container remove 270102d71da9261727a409d676c86203341be925af5f4f0f67648ace673d91fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:49 compute-0 systemd[1]: libpod-conmon-270102d71da9261727a409d676c86203341be925af5f4f0f67648ace673d91fb.scope: Deactivated successfully.
Nov 25 20:06:49 compute-0 sudo[88357]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 25 20:06:49 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 20:06:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:49 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:49 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 25 20:06:49 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 25 20:06:49 compute-0 sudo[88499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:49 compute-0 sudo[88499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:49 compute-0 sudo[88499]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:49 compute-0 sudo[88524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:49 compute-0 sudo[88524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:49 compute-0 sudo[88524]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:49 compute-0 sudo[88549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:49 compute-0 sudo[88549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:49 compute-0 sudo[88549]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:49 compute-0 sudo[88574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:49 compute-0 sudo[88574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:49 compute-0 ceph-mon[75144]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:49 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 20:06:49 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:50 compute-0 podman[88639]: 2025-11-25 20:06:50.03429073 +0000 UTC m=+0.046364003 container create 90b7adedfe50c912085d93beb671886faebdf346e640864606eae5690fc6f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:06:50 compute-0 systemd[1]: Started libpod-conmon-90b7adedfe50c912085d93beb671886faebdf346e640864606eae5690fc6f77d.scope.
Nov 25 20:06:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:50 compute-0 podman[88639]: 2025-11-25 20:06:50.013276558 +0000 UTC m=+0.025349851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:50 compute-0 podman[88639]: 2025-11-25 20:06:50.126647134 +0000 UTC m=+0.138720447 container init 90b7adedfe50c912085d93beb671886faebdf346e640864606eae5690fc6f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:06:50 compute-0 podman[88639]: 2025-11-25 20:06:50.1369803 +0000 UTC m=+0.149053583 container start 90b7adedfe50c912085d93beb671886faebdf346e640864606eae5690fc6f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 20:06:50 compute-0 podman[88639]: 2025-11-25 20:06:50.140827664 +0000 UTC m=+0.152901047 container attach 90b7adedfe50c912085d93beb671886faebdf346e640864606eae5690fc6f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:50 compute-0 nostalgic_jepsen[88655]: 167 167
Nov 25 20:06:50 compute-0 systemd[1]: libpod-90b7adedfe50c912085d93beb671886faebdf346e640864606eae5690fc6f77d.scope: Deactivated successfully.
Nov 25 20:06:50 compute-0 podman[88639]: 2025-11-25 20:06:50.145780171 +0000 UTC m=+0.157853464 container died 90b7adedfe50c912085d93beb671886faebdf346e640864606eae5690fc6f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c91b9cf0608fb93279c1e5b7bd9e5409cc6d5eba273b52e2022afb712442afcb-merged.mount: Deactivated successfully.
Nov 25 20:06:50 compute-0 podman[88639]: 2025-11-25 20:06:50.194025609 +0000 UTC m=+0.206098922 container remove 90b7adedfe50c912085d93beb671886faebdf346e640864606eae5690fc6f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:50 compute-0 systemd[1]: libpod-conmon-90b7adedfe50c912085d93beb671886faebdf346e640864606eae5690fc6f77d.scope: Deactivated successfully.
Nov 25 20:06:50 compute-0 podman[88685]: 2025-11-25 20:06:50.546751871 +0000 UTC m=+0.037666975 container create bbcf1c35e50b89f47e03e033b48a6e058405a3fae30f66c37ec37eea486f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:06:50 compute-0 systemd[1]: Started libpod-conmon-bbcf1c35e50b89f47e03e033b48a6e058405a3fae30f66c37ec37eea486f093e.scope.
Nov 25 20:06:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df293b61fd260704ed8000a021d526266b1951fb77ad2fee0ba06a39480440b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df293b61fd260704ed8000a021d526266b1951fb77ad2fee0ba06a39480440b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df293b61fd260704ed8000a021d526266b1951fb77ad2fee0ba06a39480440b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df293b61fd260704ed8000a021d526266b1951fb77ad2fee0ba06a39480440b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df293b61fd260704ed8000a021d526266b1951fb77ad2fee0ba06a39480440b3/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:50 compute-0 podman[88685]: 2025-11-25 20:06:50.53015577 +0000 UTC m=+0.021070894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:50 compute-0 podman[88685]: 2025-11-25 20:06:50.642771975 +0000 UTC m=+0.133687109 container init bbcf1c35e50b89f47e03e033b48a6e058405a3fae30f66c37ec37eea486f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:50 compute-0 podman[88685]: 2025-11-25 20:06:50.662313073 +0000 UTC m=+0.153228167 container start bbcf1c35e50b89f47e03e033b48a6e058405a3fae30f66c37ec37eea486f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:50 compute-0 podman[88685]: 2025-11-25 20:06:50.665742505 +0000 UTC m=+0.156657659 container attach bbcf1c35e50b89f47e03e033b48a6e058405a3fae30f66c37ec37eea486f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:50 compute-0 ceph-mon[75144]: Deploying daemon osd.0 on compute-0
Nov 25 20:06:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:51 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate-test[88701]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 20:06:51 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate-test[88701]:                             [--no-systemd] [--no-tmpfs]
Nov 25 20:06:51 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate-test[88701]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 20:06:51 compute-0 systemd[1]: libpod-bbcf1c35e50b89f47e03e033b48a6e058405a3fae30f66c37ec37eea486f093e.scope: Deactivated successfully.
Nov 25 20:06:51 compute-0 podman[88685]: 2025-11-25 20:06:51.293933522 +0000 UTC m=+0.784848676 container died bbcf1c35e50b89f47e03e033b48a6e058405a3fae30f66c37ec37eea486f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate-test, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-df293b61fd260704ed8000a021d526266b1951fb77ad2fee0ba06a39480440b3-merged.mount: Deactivated successfully.
Nov 25 20:06:51 compute-0 podman[88685]: 2025-11-25 20:06:51.346112757 +0000 UTC m=+0.837027861 container remove bbcf1c35e50b89f47e03e033b48a6e058405a3fae30f66c37ec37eea486f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate-test, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:51 compute-0 systemd[1]: libpod-conmon-bbcf1c35e50b89f47e03e033b48a6e058405a3fae30f66c37ec37eea486f093e.scope: Deactivated successfully.
Nov 25 20:06:51 compute-0 systemd[1]: Reloading.
Nov 25 20:06:51 compute-0 systemd-rc-local-generator[88760]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:51 compute-0 systemd-sysv-generator[88766]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:51 compute-0 ceph-mon[75144]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:52 compute-0 systemd[1]: Reloading.
Nov 25 20:06:52 compute-0 systemd-rc-local-generator[88802]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:52 compute-0 systemd-sysv-generator[88806]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:52 compute-0 systemd[1]: Starting Ceph osd.0 for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:06:52 compute-0 podman[88862]: 2025-11-25 20:06:52.641634611 +0000 UTC m=+0.048864507 container create d6084310ad6dbed7e4c95b7ededa816bd5e9ee856bfeec2614d4bc93a393e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:06:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615557b170300e3642000dee0992738408ce15c5b40ca67b921cc17b491a1a36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615557b170300e3642000dee0992738408ce15c5b40ca67b921cc17b491a1a36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615557b170300e3642000dee0992738408ce15c5b40ca67b921cc17b491a1a36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615557b170300e3642000dee0992738408ce15c5b40ca67b921cc17b491a1a36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615557b170300e3642000dee0992738408ce15c5b40ca67b921cc17b491a1a36/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:52 compute-0 podman[88862]: 2025-11-25 20:06:52.709043187 +0000 UTC m=+0.116273123 container init d6084310ad6dbed7e4c95b7ededa816bd5e9ee856bfeec2614d4bc93a393e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:06:52 compute-0 podman[88862]: 2025-11-25 20:06:52.618903478 +0000 UTC m=+0.026133474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:52 compute-0 podman[88862]: 2025-11-25 20:06:52.723295019 +0000 UTC m=+0.130524925 container start d6084310ad6dbed7e4c95b7ededa816bd5e9ee856bfeec2614d4bc93a393e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:06:52 compute-0 podman[88862]: 2025-11-25 20:06:52.727339679 +0000 UTC m=+0.134569605 container attach d6084310ad6dbed7e4c95b7ededa816bd5e9ee856bfeec2614d4bc93a393e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:06:53 compute-0 ceph-mon[75144]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:53 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate[88877]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 20:06:53 compute-0 bash[88862]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 20:06:53 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate[88877]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 20:06:53 compute-0 bash[88862]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 20:06:53 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate[88877]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 20:06:53 compute-0 bash[88862]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 20:06:53 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate[88877]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 20:06:53 compute-0 bash[88862]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 20:06:53 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate[88877]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:53 compute-0 bash[88862]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:53 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate[88877]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 20:06:53 compute-0 bash[88862]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 20:06:54 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate[88877]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 20:06:54 compute-0 bash[88862]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 20:06:54 compute-0 systemd[1]: libpod-d6084310ad6dbed7e4c95b7ededa816bd5e9ee856bfeec2614d4bc93a393e0bf.scope: Deactivated successfully.
Nov 25 20:06:54 compute-0 podman[88862]: 2025-11-25 20:06:54.04539961 +0000 UTC m=+1.452629536 container died d6084310ad6dbed7e4c95b7ededa816bd5e9ee856bfeec2614d4bc93a393e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 20:06:54 compute-0 systemd[1]: libpod-d6084310ad6dbed7e4c95b7ededa816bd5e9ee856bfeec2614d4bc93a393e0bf.scope: Consumed 1.341s CPU time.
Nov 25 20:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-615557b170300e3642000dee0992738408ce15c5b40ca67b921cc17b491a1a36-merged.mount: Deactivated successfully.
Nov 25 20:06:54 compute-0 podman[88862]: 2025-11-25 20:06:54.120881575 +0000 UTC m=+1.528111471 container remove d6084310ad6dbed7e4c95b7ededa816bd5e9ee856bfeec2614d4bc93a393e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:54 compute-0 podman[89065]: 2025-11-25 20:06:54.41590755 +0000 UTC m=+0.065802120 container create 64635db2efad845f51a0c9bdaa645bf938d08d90d470093aaaf5c48de5e978ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739fdae4c2f6876f484dfec3e6db5cfb26e9f37ad229a9a84ccc5f8afd81b371/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739fdae4c2f6876f484dfec3e6db5cfb26e9f37ad229a9a84ccc5f8afd81b371/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739fdae4c2f6876f484dfec3e6db5cfb26e9f37ad229a9a84ccc5f8afd81b371/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739fdae4c2f6876f484dfec3e6db5cfb26e9f37ad229a9a84ccc5f8afd81b371/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739fdae4c2f6876f484dfec3e6db5cfb26e9f37ad229a9a84ccc5f8afd81b371/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:54 compute-0 podman[89065]: 2025-11-25 20:06:54.390971492 +0000 UTC m=+0.040866102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:54 compute-0 podman[89065]: 2025-11-25 20:06:54.493763154 +0000 UTC m=+0.143657754 container init 64635db2efad845f51a0c9bdaa645bf938d08d90d470093aaaf5c48de5e978ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:06:54 compute-0 podman[89065]: 2025-11-25 20:06:54.510693175 +0000 UTC m=+0.160587745 container start 64635db2efad845f51a0c9bdaa645bf938d08d90d470093aaaf5c48de5e978ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:54 compute-0 bash[89065]: 64635db2efad845f51a0c9bdaa645bf938d08d90d470093aaaf5c48de5e978ad
Nov 25 20:06:54 compute-0 systemd[1]: Started Ceph osd.0 for 712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:06:54 compute-0 ceph-osd[89084]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 20:06:54 compute-0 ceph-osd[89084]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 20:06:54 compute-0 ceph-osd[89084]: pidfile_write: ignore empty --pid-file
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d3540f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d3540f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d3540f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d3540f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d36247800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d36247800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d36247800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d36247800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d36247800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 20:06:54 compute-0 sudo[88574]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:54 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:54 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 25 20:06:54 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 20:06:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:54 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:54 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 25 20:06:54 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 25 20:06:54 compute-0 sudo[89097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:54 compute-0 sudo[89097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:54 compute-0 sudo[89097]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:54 compute-0 sudo[89122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:06:54 compute-0 sudo[89122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:54 compute-0 sudo[89122]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:54 compute-0 sudo[89147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:06:54 compute-0 sudo[89147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:54 compute-0 sudo[89147]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:54 compute-0 ceph-osd[89084]: bdev(0x558d3540f800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 20:06:54 compute-0 sudo[89172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:06:54 compute-0 sudo[89172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 25 20:06:55 compute-0 ceph-osd[89084]: load: jerasure load: lrc 
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 20:06:55 compute-0 podman[89244]: 2025-11-25 20:06:55.296612113 +0000 UTC m=+0.066885181 container create 29e38aa48dcd83ee94c9da8e2e164da8733be512b989a35b0f8fdd343b454998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:55 compute-0 systemd[1]: Started libpod-conmon-29e38aa48dcd83ee94c9da8e2e164da8733be512b989a35b0f8fdd343b454998.scope.
Nov 25 20:06:55 compute-0 podman[89244]: 2025-11-25 20:06:55.269227652 +0000 UTC m=+0.039500720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 20:06:55 compute-0 podman[89244]: 2025-11-25 20:06:55.40325827 +0000 UTC m=+0.173531348 container init 29e38aa48dcd83ee94c9da8e2e164da8733be512b989a35b0f8fdd343b454998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:06:55 compute-0 podman[89244]: 2025-11-25 20:06:55.415461711 +0000 UTC m=+0.185734779 container start 29e38aa48dcd83ee94c9da8e2e164da8733be512b989a35b0f8fdd343b454998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:06:55 compute-0 podman[89244]: 2025-11-25 20:06:55.419633655 +0000 UTC m=+0.189906713 container attach 29e38aa48dcd83ee94c9da8e2e164da8733be512b989a35b0f8fdd343b454998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:06:55 compute-0 optimistic_greider[89260]: 167 167
Nov 25 20:06:55 compute-0 systemd[1]: libpod-29e38aa48dcd83ee94c9da8e2e164da8733be512b989a35b0f8fdd343b454998.scope: Deactivated successfully.
Nov 25 20:06:55 compute-0 podman[89244]: 2025-11-25 20:06:55.426051035 +0000 UTC m=+0.196324093 container died 29e38aa48dcd83ee94c9da8e2e164da8733be512b989a35b0f8fdd343b454998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 25 20:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-43adc878c2bf5ee3f70a2644079ecb0f6917283de3f54c5a0e1461d1f55a1151-merged.mount: Deactivated successfully.
Nov 25 20:06:55 compute-0 podman[89244]: 2025-11-25 20:06:55.478971842 +0000 UTC m=+0.249244910 container remove 29e38aa48dcd83ee94c9da8e2e164da8733be512b989a35b0f8fdd343b454998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:55 compute-0 systemd[1]: libpod-conmon-29e38aa48dcd83ee94c9da8e2e164da8733be512b989a35b0f8fdd343b454998.scope: Deactivated successfully.
Nov 25 20:06:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 20:06:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:55 compute-0 ceph-mon[75144]: Deploying daemon osd.1 on compute-0
Nov 25 20:06:55 compute-0 ceph-mon[75144]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:55 compute-0 ceph-osd[89084]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 20:06:55 compute-0 ceph-osd[89084]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c9400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c9400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluefs mount
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluefs mount shared_bdev_used = 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: RocksDB version: 7.9.2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Git sha 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: DB SUMMARY
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: DB Session ID:  Y6DXWBDI91VUPTANU4J6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: CURRENT file:  CURRENT
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                         Options.error_if_exists: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.create_if_missing: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                                     Options.env: 0x558d36299c70
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                                Options.info_log: 0x558d354968a0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                              Options.statistics: (nil)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.use_fsync: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                              Options.db_log_dir: 
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.write_buffer_manager: 0x558d363a2460
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.unordered_write: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.row_cache: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                              Options.wal_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.two_write_queues: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.wal_compression: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.atomic_flush: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.max_background_jobs: 4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.max_background_compactions: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.max_subcompactions: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.max_open_files: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Compression algorithms supported:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kZSTD supported: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kXpressCompression supported: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kBZip2Compression supported: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kLZ4Compression supported: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kZlibCompression supported: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kSnappyCompression supported: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d354962c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d354962c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d354962c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d354962c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d354962c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d354962c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d354962c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d35496240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d35483090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d35496240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d35483090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d35496240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d35483090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2b79262a-8677-4746-8575-a613e5432c69
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101215693998, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101215694237, "job": 1, "event": "recovery_finished"}
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: freelist init
Nov 25 20:06:55 compute-0 ceph-osd[89084]: freelist _read_cfg
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluefs umount
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c9400 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 20:06:55 compute-0 podman[89490]: 2025-11-25 20:06:55.856031015 +0000 UTC m=+0.067394846 container create 62eab8f202f44d1a6c4b258b3eb7dfd9fef1bb8cca6ee14884a838a7696e011b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:06:55 compute-0 systemd[1]: Started libpod-conmon-62eab8f202f44d1a6c4b258b3eb7dfd9fef1bb8cca6ee14884a838a7696e011b.scope.
Nov 25 20:06:55 compute-0 podman[89490]: 2025-11-25 20:06:55.832091616 +0000 UTC m=+0.043455467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c9400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bdev(0x558d362c9400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluefs mount
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluefs mount shared_bdev_used = 4718592
Nov 25 20:06:55 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: RocksDB version: 7.9.2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Git sha 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: DB SUMMARY
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: DB Session ID:  Y6DXWBDI91VUPTANU4J7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: CURRENT file:  CURRENT
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                         Options.error_if_exists: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.create_if_missing: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                                     Options.env: 0x558d3644a3f0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                                Options.info_log: 0x558d3548cb40
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                              Options.statistics: (nil)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.use_fsync: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                              Options.db_log_dir: 
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.write_buffer_manager: 0x558d363a26e0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.unordered_write: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.row_cache: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                              Options.wal_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.two_write_queues: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.wal_compression: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.atomic_flush: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.max_background_jobs: 4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.max_background_compactions: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.max_subcompactions: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.max_open_files: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Compression algorithms supported:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kZSTD supported: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kXpressCompression supported: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kBZip2Compression supported: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kLZ4Compression supported: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kZlibCompression supported: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         kSnappyCompression supported: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
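[editor's note] The block ending above is RocksDB echoing back the effective configuration of one column family. As a hedged sketch only, here is how the same numbers would map onto the stock RocksDB C++ API. BlueStore actually derives these from its own option strings, and its BinnedLRUCache has no public constructor in vanilla RocksDB, so NewLRUCache stands in for the block cache; the bloom filter's 10 bits per key is likewise an assumption (the dump only says "bloomfilter").

#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Illustrative reconstruction of the dumped per-column-family options.
rocksdb::ColumnFamilyOptions MakeCfOptions() {
  rocksdb::ColumnFamilyOptions cf;
  cf.write_buffer_size = 16ULL << 20;                      // 16777216
  cf.max_write_buffer_number = 64;
  cf.min_write_buffer_number_to_merge = 6;
  cf.compression = rocksdb::kLZ4Compression;               // Options.compression: LZ4
  cf.num_levels = 7;
  cf.level0_file_num_compaction_trigger = 8;
  cf.level0_slowdown_writes_trigger = 20;
  cf.level0_stop_writes_trigger = 36;
  cf.target_file_size_base = 64ULL << 20;                  // 67108864
  cf.max_bytes_for_level_base = 1ULL << 30;                // 1073741824
  cf.max_bytes_for_level_multiplier = 8.0;
  cf.soft_pending_compaction_bytes_limit = 64ULL << 30;    // 68719476736
  cf.hard_pending_compaction_bytes_limit = 256ULL << 30;   // 274877906944
  cf.compaction_style = rocksdb::kCompactionStyleLevel;
  cf.compaction_pri = rocksdb::kMinOverlappingRatio;
  cf.ttl = 2592000;                                        // 30 days

  rocksdb::BlockBasedTableOptions t;
  t.block_size = 4096;
  t.metadata_block_size = 4096;
  t.cache_index_and_filter_blocks = true;
  t.pin_top_level_index_and_filter = true;
  t.whole_key_filtering = true;
  t.format_version = 5;
  t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
  t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
  cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
  return cf;
}

int main() {
  rocksdb::ColumnFamilyOptions cf = MakeCfOptions();
  (void)cf;  // a real OSD would pass this per family to rocksdb::DB::Open
  return 0;
}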
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
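[editor's note] Since level_compaction_dynamic_level_bytes is 0 and every max_bytes_for_level_multiplier_addtl entry is 1, the sizing options dumped above pin each level to a fixed capacity target: 1 GiB at L1, then a factor of 8 per level. A quick arithmetic sketch of the implied targets:

#include <cstdint>
#include <cstdio>

int main() {
  std::uint64_t target = 1ULL << 30;          // max_bytes_for_level_base (L1)
  for (int level = 1; level <= 6; ++level) {  // num_levels = 7 gives L1..L6
    std::printf("L%d target: %llu bytes (%g GiB)\n", level,
                static_cast<unsigned long long>(target),
                target / static_cast<double>(1ULL << 30));
    target *= 8;  // max_bytes_for_level_multiplier
  }
  // Prints 1, 8, 64, 512, 4096, 32768 GiB for L1..L6.
  return 0;
}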
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
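[editor's note] Decoding the block_cache_options in the dumps: num_shard_bits = 4 splits the cache into 2^4 = 16 independently locked shards, and the 483183820-byte capacity is roughly 460.8 MiB, which works out to exactly 45% of 1 GiB (reading that as a BlueStore cache-ratio carve-out is an assumption, not something the log states). Checking the numbers:

#include <cstdio>

int main() {
  const unsigned long long capacity = 483183820ULL;  // block_cache capacity
  const int shards = 1 << 4;                         // num_shard_bits = 4
  std::printf("%d shards, %.1f MiB each, %.1f MiB total (%.0f%% of 1 GiB)\n",
              shards,
              capacity / (double)shards / (1 << 20),
              capacity / (double)(1 << 20),
              100.0 * capacity / (double)(1ULL << 30));
  return 0;
}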
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
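[editor's note] Each [m-*], [p-*], [O-*] family gets its own options dump because the database is opened with an explicit column-family list. A hedged sketch of what that open call looks like with the generic RocksDB API; the path, the default per-family options, and the family list here are illustrative, since ceph-osd assembles the real sharded list (omap, pglog, and object key spaces) inside BlueStore:

#include <rocksdb/db.h>
#include <cassert>
#include <string>
#include <vector>

int main() {
  rocksdb::DBOptions db_opts;
  db_opts.create_if_missing = true;
  db_opts.create_missing_column_families = true;

  // Family names mirror the dump; options are defaults for brevity.
  std::vector<std::string> names = {rocksdb::kDefaultColumnFamilyName,
                                    "m-2", "p-0", "p-1", "p-2", "O-0"};
  std::vector<rocksdb::ColumnFamilyDescriptor> families;
  for (const auto& name : names)
    families.emplace_back(name, rocksdb::ColumnFamilyOptions());

  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(db_opts, "/tmp/rocksdb-cf-demo",
                                        families, &handles, &db);
  assert(s.ok());  // one handle per descriptor on success
  for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
  delete db;
  return 0;
}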
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d354831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ac099c4dc60a2684a4ed0fd50c1605cda8efcbc49b79e073eaca655efee05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
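[editor's note] One non-RocksDB line is interleaved in the dump above: the kernel's xfs remount notice for the container overlay, warning that the filesystem's timestamps only reach 0x7fffffff, i.e. the 32-bit time_t limit. Decoding that constant (the arithmetic is mine, not from the log):

#include <cstdio>
#include <ctime>

int main() {
  std::time_t limit = 0x7fffffff;  // 2147483647 seconds since the epoch
  std::tm* utc = std::gmtime(&limit);
  char buf[64];
  std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
  std::printf("xfs timestamp limit: %s\n", buf);  // 2038-01-19 03:14:07 UTC
  return 0;
}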
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d35483090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ac099c4dc60a2684a4ed0fd50c1605cda8efcbc49b79e073eaca655efee05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
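
The per-column-family option dump above for [O-0] (repeated nearly verbatim for [O-1] and [O-2] below) is easier to sanity-check once the raw byte counts are converted into human units and the implied LSM level targets are derived. A minimal sketch of that arithmetic, using only values visible in the dump; the variable names are mine and nothing here comes from Ceph or RocksDB source:

```python
# Arithmetic over option values copied from the [O-0] dump above.
GiB = 1024 ** 3
MiB = 1024 ** 2

write_buffer_size = 16777216     # 16 MiB per memtable
min_merge         = 6            # min_write_buffer_number_to_merge
max_buffers       = 64           # max_write_buffer_number

# RocksDB accumulates roughly min_merge immutable memtables before flushing:
print(write_buffer_size * min_merge / MiB)    # 96.0 MiB per flush
# Worst-case memtable memory for this column family before writes stall:
print(write_buffer_size * max_buffers / GiB)  # 1.0 GiB

# Static level sizing (level_compaction_dynamic_level_bytes is 0):
# L1 = max_bytes_for_level_base, each deeper level x max_bytes_for_level_multiplier.
base, mult = 1073741824, 8.0
for lvl in range(1, 7):
    print(f"L{lvl} target: {base * mult ** (lvl - 1) / GiB:g} GiB")
# -> 1, 8, 64, 512, 4096, 32768 GiB

# L0 back-pressure comes from the triggers in the dump:
# 8 files start compaction, 20 slow writes, 36 stop them.

# Shared block cache: capacity split across 2**num_shard_bits shards.
print(536870912 / MiB, 2 ** 4)   # 512.0 MiB, 16 shards
```
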
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d35483090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ac099c4dc60a2684a4ed0fd50c1605cda8efcbc49b79e073eaca655efee05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ac099c4dc60a2684a4ed0fd50c1605cda8efcbc49b79e073eaca655efee05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ac099c4dc60a2684a4ed0fd50c1605cda8efcbc49b79e073eaca655efee05/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:           Options.merge_operator: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d3548d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d35483090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.compression: LZ4
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.num_levels: 7
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
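
The kernel "xfs filesystem being remounted ... supports timestamps until 2038 (0x7fffffff)" lines interleaved above are informational, emitted as podman bind-mounts the ceph config, log, crash, and OSD data paths into the activation container; they indicate the underlying XFS was likely formatted without the bigtime feature, so its inode timestamps stop at the classic 32-bit time_t limit. A quick check of what that limit decodes to (illustrative only):

```python
from datetime import datetime, timezone

# 0x7fffffff is the 32-bit time_t ceiling the kernel warning refers to.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```
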
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
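
The manifest recovery above enumerates twelve column families. Beyond [default], the m-*, p-*, and O-* groups plus [L] and [P] reflect BlueStore's sharding of its key space across RocksDB column families (governed by Ceph's bluestore_rocksdb_cfs option), and the two "(skipping printing options)" lines earlier suggest RocksDB elided the option dumps for the remaining families rather than printing all twelve. A hypothetical sketch that extracts this inventory from a saved journal excerpt (the column_families helper is mine):

```python
import re

# `log` is assumed to hold journal text like the version_set.cc lines above.
PAT = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\), log number is (\d+)")

def column_families(log: str) -> dict[int, str]:
    """Map column-family ID -> name from RocksDB recovery log lines."""
    return {int(cf_id): name for name, cf_id, _log_no in PAT.findall(log)}

# For this boot this yields {0: 'default', 1: 'm-0', ..., 11: 'P'},
# with every family on log number 5.
```
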
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2b79262a-8677-4746-8575-a613e5432c69
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101215958685, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101215977006, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101215, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b79262a-8677-4746-8575-a613e5432c69", "db_session_id": "Y6DXWBDI91VUPTANU4J7", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101215980057, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101215, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b79262a-8677-4746-8575-a613e5432c69", "db_session_id": "Y6DXWBDI91VUPTANU4J7", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101215985387, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101215, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b79262a-8677-4746-8575-a613e5432c69", "db_session_id": "Y6DXWBDI91VUPTANU4J7", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101215986514, "job": 1, "event": "recovery_finished"}
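
Each EVENT_LOG_v1 record above is a single-line JSON payload after the "EVENT_LOG_v1 " marker, which makes the WAL-replay story easy to script against: recovery_started names the WAL files (here just #31), each table_file_creation describes one SST written during replay, and recovery_finished closes the job. A minimal, illustrative parser (the names are mine):

```python
import json

MARKER = "EVENT_LOG_v1 "

def rocksdb_events(lines):
    """Yield parsed EVENT_LOG_v1 payloads from journal lines."""
    for line in lines:
        _, _, payload = line.partition(MARKER)
        if payload:
            yield json.loads(payload)

# Summarizing the replay of WAL #31 above:
#   for ev in rocksdb_events(journal_lines):
#       if ev.get("event") == "table_file_creation":
#           print(ev["cf_name"], ev["file_number"], ev["file_size"])
#   -> default 35 1272 / p-0 36 1594 / O-2 37 1275
```
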
Nov 25 20:06:55 compute-0 ceph-osd[89084]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 20:06:55 compute-0 podman[89490]: 2025-11-25 20:06:55.986681973 +0000 UTC m=+0.198045794 container init 62eab8f202f44d1a6c4b258b3eb7dfd9fef1bb8cca6ee14884a838a7696e011b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:06:55 compute-0 podman[89490]: 2025-11-25 20:06:55.998564114 +0000 UTC m=+0.209927925 container start 62eab8f202f44d1a6c4b258b3eb7dfd9fef1bb8cca6ee14884a838a7696e011b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:56 compute-0 podman[89490]: 2025-11-25 20:06:56.001754129 +0000 UTC m=+0.213117930 container attach 62eab8f202f44d1a6c4b258b3eb7dfd9fef1bb8cca6ee14884a838a7696e011b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:06:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558d355f1c00
Nov 25 20:06:56 compute-0 ceph-osd[89084]: rocksdb: DB pointer 0x558d3638ba00
Nov 25 20:06:56 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
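
The _open_db line above echoes the flat RocksDB tuning string BlueStore applied (Ceph's bluestore_rocksdb_options), and its values match the per-column-family dumps earlier (write_buffer_size=16777216, max_write_buffer_number=64, and so on). Since it is a plain comma-separated k=v list, a one-liner is enough to inspect it; illustrative sketch:

```python
opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
print(opts["write_buffer_size"], opts["compaction_readahead_size"])
# 16777216 2MB
```
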
Nov 25 20:06:56 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 25 20:06:56 compute-0 ceph-osd[89084]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
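
The "DUMPING STATS" block that follows is RocksDB's periodic statistics snapshot, printed here once right after open: global write/WAL/stall counters first, then per-column-family compaction tables (file counts and sizes per level, read/write volume, W-Amp, stall causes) and shared block cache usage, repeated for each family. One value worth decoding in the cache lines below is "occupancy: 18446744073709551615", which is 2**64 - 1; that reads as an unsigned -1 sentinel from the BinnedLRUCache shim (a counter it does not track) rather than a real entry count. Quick check, illustrative only:

```python
print(18446744073709551615 == 2**64 - 1)  # True: UINT64_MAX sentinel, not a count
```
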
Nov 25 20:06:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:06:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:06:56 compute-0 ceph-osd[89084]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 20:06:56 compute-0 ceph-osd[89084]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 20:06:56 compute-0 ceph-osd[89084]: _get_class not permitted to load lua
Nov 25 20:06:56 compute-0 ceph-osd[89084]: _get_class not permitted to load sdk
Nov 25 20:06:56 compute-0 ceph-osd[89084]: _get_class not permitted to load test_remote_reads
Nov 25 20:06:56 compute-0 ceph-osd[89084]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 20:06:56 compute-0 ceph-osd[89084]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 20:06:56 compute-0 ceph-osd[89084]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 20:06:56 compute-0 ceph-osd[89084]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 20:06:56 compute-0 ceph-osd[89084]: osd.0 0 load_pgs
Nov 25 20:06:56 compute-0 ceph-osd[89084]: osd.0 0 load_pgs opened 0 pgs
Nov 25 20:06:56 compute-0 ceph-osd[89084]: osd.0 0 log_to_monitors true
Nov 25 20:06:56 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0[89080]: 2025-11-25T20:06:56.014+0000 7f0ba1d1f740 -1 osd.0 0 log_to_monitors true
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 25 20:06:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:06:56 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate-test[89506]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 20:06:56 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate-test[89506]:                             [--no-systemd] [--no-tmpfs]
Nov 25 20:06:56 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate-test[89506]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 20:06:56 compute-0 systemd[1]: libpod-62eab8f202f44d1a6c4b258b3eb7dfd9fef1bb8cca6ee14884a838a7696e011b.scope: Deactivated successfully.
Nov 25 20:06:56 compute-0 podman[89490]: 2025-11-25 20:06:56.593278212 +0000 UTC m=+0.804642053 container died 62eab8f202f44d1a6c4b258b3eb7dfd9fef1bb8cca6ee14884a838a7696e011b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate-test, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:06:56 compute-0 ceph-mon[75144]: from='osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 20:06:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 25 20:06:56 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 20:06:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:06:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:06:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:06:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:06:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c90ac099c4dc60a2684a4ed0fd50c1605cda8efcbc49b79e073eaca655efee05-merged.mount: Deactivated successfully.
Nov 25 20:06:56 compute-0 podman[89490]: 2025-11-25 20:06:56.668824538 +0000 UTC m=+0.880188349 container remove 62eab8f202f44d1a6c4b258b3eb7dfd9fef1bb8cca6ee14884a838a7696e011b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:06:56 compute-0 systemd[1]: libpod-conmon-62eab8f202f44d1a6c4b258b3eb7dfd9fef1bb8cca6ee14884a838a7696e011b.scope: Deactivated successfully.
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:06:56
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [balancer INFO root] No pools available
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:06:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:06:56 compute-0 systemd[1]: Reloading.
Nov 25 20:06:56 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 20:06:56 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 20:06:57 compute-0 systemd-sysv-generator[89791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:57 compute-0 systemd-rc-local-generator[89788]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:57 compute-0 systemd[1]: Reloading.
Nov 25 20:06:57 compute-0 systemd-rc-local-generator[89829]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:57 compute-0 systemd-sysv-generator[89833]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:57 compute-0 systemd[1]: Starting Ceph osd.1 for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:06:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 25 20:06:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:06:57 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 20:06:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 25 20:06:57 compute-0 ceph-osd[89084]: osd.0 0 done with init, starting boot process
Nov 25 20:06:57 compute-0 ceph-osd[89084]: osd.0 0 start_boot
Nov 25 20:06:57 compute-0 ceph-osd[89084]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 20:06:57 compute-0 ceph-osd[89084]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 20:06:57 compute-0 ceph-osd[89084]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 20:06:57 compute-0 ceph-osd[89084]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 20:06:57 compute-0 ceph-osd[89084]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 25 20:06:57 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 25 20:06:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:06:57 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:06:57 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:06:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:06:57 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:06:57 compute-0 ceph-mon[75144]: from='osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 20:06:57 compute-0 ceph-mon[75144]: osdmap e7: 3 total, 0 up, 3 in
Nov 25 20:06:57 compute-0 ceph-mon[75144]: from='osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 20:06:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:57 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 20:06:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:06:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:06:57 compute-0 ceph-mon[75144]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:57 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:06:57 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:06:57 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1624567662; not ready for session (expect reconnect)
Nov 25 20:06:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:06:57 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:57 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 20:06:57 compute-0 podman[89885]: 2025-11-25 20:06:57.868858256 +0000 UTC m=+0.077840375 container create bbf14fee1adb6c67eeff2f07417f096ab7010249662e4faecf83d8524e0baf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:06:57 compute-0 podman[89885]: 2025-11-25 20:06:57.834544261 +0000 UTC m=+0.043526420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0923c7bb85c85923cb6486860605a6993e91897c12801f0d3fd281af78f17cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0923c7bb85c85923cb6486860605a6993e91897c12801f0d3fd281af78f17cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0923c7bb85c85923cb6486860605a6993e91897c12801f0d3fd281af78f17cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0923c7bb85c85923cb6486860605a6993e91897c12801f0d3fd281af78f17cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0923c7bb85c85923cb6486860605a6993e91897c12801f0d3fd281af78f17cb/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:57 compute-0 podman[89885]: 2025-11-25 20:06:57.972551576 +0000 UTC m=+0.181533745 container init bbf14fee1adb6c67eeff2f07417f096ab7010249662e4faecf83d8524e0baf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 20:06:57 compute-0 podman[89885]: 2025-11-25 20:06:57.985620304 +0000 UTC m=+0.194602423 container start bbf14fee1adb6c67eeff2f07417f096ab7010249662e4faecf83d8524e0baf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:06:57 compute-0 podman[89885]: 2025-11-25 20:06:57.992117366 +0000 UTC m=+0.201099445 container attach bbf14fee1adb6c67eeff2f07417f096ab7010249662e4faecf83d8524e0baf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:06:58 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1624567662; not ready for session (expect reconnect)
Nov 25 20:06:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:06:58 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:58 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 20:06:58 compute-0 ceph-mon[75144]: from='osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 20:06:58 compute-0 ceph-mon[75144]: osdmap e8: 3 total, 0 up, 3 in
Nov 25 20:06:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:06:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:06:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:59 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate[89899]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 20:06:59 compute-0 bash[89885]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 20:06:59 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate[89899]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 20:06:59 compute-0 bash[89885]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 20:06:59 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate[89899]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 20:06:59 compute-0 bash[89885]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 20:06:59 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate[89899]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 20:06:59 compute-0 bash[89885]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 20:06:59 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate[89899]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 20:06:59 compute-0 bash[89885]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 20:06:59 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate[89899]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 20:06:59 compute-0 bash[89885]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 20:06:59 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate[89899]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 20:06:59 compute-0 bash[89885]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 20:06:59 compute-0 systemd[1]: libpod-bbf14fee1adb6c67eeff2f07417f096ab7010249662e4faecf83d8524e0baf71.scope: Deactivated successfully.
Nov 25 20:06:59 compute-0 systemd[1]: libpod-bbf14fee1adb6c67eeff2f07417f096ab7010249662e4faecf83d8524e0baf71.scope: Consumed 1.293s CPU time.
Nov 25 20:06:59 compute-0 podman[89885]: 2025-11-25 20:06:59.261059663 +0000 UTC m=+1.470041822 container died bbf14fee1adb6c67eeff2f07417f096ab7010249662e4faecf83d8524e0baf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 20:06:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0923c7bb85c85923cb6486860605a6993e91897c12801f0d3fd281af78f17cb-merged.mount: Deactivated successfully.
Nov 25 20:06:59 compute-0 podman[89885]: 2025-11-25 20:06:59.372971787 +0000 UTC m=+1.581953906 container remove bbf14fee1adb6c67eeff2f07417f096ab7010249662e4faecf83d8524e0baf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:06:59 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1624567662; not ready for session (expect reconnect)
Nov 25 20:06:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:06:59 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:59 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 20:06:59 compute-0 ceph-mon[75144]: purged_snaps scrub starts
Nov 25 20:06:59 compute-0 ceph-mon[75144]: purged_snaps scrub ok
Nov 25 20:06:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:59 compute-0 ceph-mon[75144]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:06:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:06:59 compute-0 podman[90073]: 2025-11-25 20:06:59.710354985 +0000 UTC m=+0.079139954 container create 6d26f06c851a36416d627402aeccf2fc4acf7c69e01c3e80c13db20c2a780c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:06:59 compute-0 podman[90073]: 2025-11-25 20:06:59.680900013 +0000 UTC m=+0.049685042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418044e4b4d727a63386e26ff2babe5cdd64b21efe097912493c90bfbb7dd3dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418044e4b4d727a63386e26ff2babe5cdd64b21efe097912493c90bfbb7dd3dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418044e4b4d727a63386e26ff2babe5cdd64b21efe097912493c90bfbb7dd3dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418044e4b4d727a63386e26ff2babe5cdd64b21efe097912493c90bfbb7dd3dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418044e4b4d727a63386e26ff2babe5cdd64b21efe097912493c90bfbb7dd3dd/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:06:59 compute-0 podman[90073]: 2025-11-25 20:06:59.844054243 +0000 UTC m=+0.212839262 container init 6d26f06c851a36416d627402aeccf2fc4acf7c69e01c3e80c13db20c2a780c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:06:59 compute-0 podman[90073]: 2025-11-25 20:06:59.864394745 +0000 UTC m=+0.233179684 container start 6d26f06c851a36416d627402aeccf2fc4acf7c69e01c3e80c13db20c2a780c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:06:59 compute-0 bash[90073]: 6d26f06c851a36416d627402aeccf2fc4acf7c69e01c3e80c13db20c2a780c19
Nov 25 20:06:59 compute-0 systemd[1]: Started Ceph osd.1 for 712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:06:59 compute-0 ceph-osd[90092]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 20:06:59 compute-0 ceph-osd[90092]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 20:06:59 compute-0 ceph-osd[90092]: pidfile_write: ignore empty --pid-file
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bdev(0x55905727f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bdev(0x55905727f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bdev(0x55905727f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bdev(0x55905727f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bdev(0x5590580c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bdev(0x5590580c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bdev(0x5590580c1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bdev(0x5590580c1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 20:06:59 compute-0 ceph-osd[90092]: bdev(0x5590580c1800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 20:06:59 compute-0 sudo[89172]: pam_unix(sudo:session): session closed for user root
Nov 25 20:06:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:06:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:06:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:06:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 25 20:06:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 20:06:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:06:59 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:06:59 compute-0 ceph-mgr[75443]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 25 20:06:59 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 25 20:07:00 compute-0 sudo[90105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:00 compute-0 sudo[90105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:00 compute-0 sudo[90105]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:00 compute-0 sudo[90152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcjjorwpfypbuebskosqygufykmbwfvc ; /usr/bin/python3'
Nov 25 20:07:00 compute-0 sudo[90152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:00 compute-0 sudo[90155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:00 compute-0 sudo[90155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:00 compute-0 sudo[90155]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x55905727f800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 20:07:00 compute-0 sudo[90181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:00 compute-0 python3[90156]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:00 compute-0 sudo[90181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:00 compute-0 sudo[90181]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.333 iops: 6485.334 elapsed_sec: 0.463
Nov 25 20:07:00 compute-0 ceph-osd[89084]: log_channel(cluster) log [WRN] : OSD bench result of 6485.334060 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 0 waiting for initial osdmap
Nov 25 20:07:00 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0[89080]: 2025-11-25T20:07:00.265+0000 7f0b9dc9f640 -1 osd.0 0 waiting for initial osdmap
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 25 20:07:00 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-0[89080]: 2025-11-25T20:07:00.298+0000 7f0b992c7640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 25 20:07:00 compute-0 sudo[90211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 712dd110-763a-5547-8ef7-acda1414fdce
Nov 25 20:07:00 compute-0 sudo[90211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:00 compute-0 podman[90209]: 2025-11-25 20:07:00.329972009 +0000 UTC m=+0.069007224 container create be7b78738ecd7d4fe243fe78d25c5f21a3e3b543ef97c2759fbc5fc45dc3f6e6 (image=quay.io/ceph/ceph:v18, name=angry_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:07:00 compute-0 systemd[1]: Started libpod-conmon-be7b78738ecd7d4fe243fe78d25c5f21a3e3b543ef97c2759fbc5fc45dc3f6e6.scope.
Nov 25 20:07:00 compute-0 podman[90209]: 2025-11-25 20:07:00.297956021 +0000 UTC m=+0.036991266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d7975544d33f5a10404489b9e695fb67e0a217f6cfa9942b9296c63b88db23/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d7975544d33f5a10404489b9e695fb67e0a217f6cfa9942b9296c63b88db23/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d7975544d33f5a10404489b9e695fb67e0a217f6cfa9942b9296c63b88db23/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:00 compute-0 ceph-osd[90092]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 25 20:07:00 compute-0 ceph-osd[90092]: load: jerasure load: lrc 
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 20:07:00 compute-0 podman[90209]: 2025-11-25 20:07:00.498044395 +0000 UTC m=+0.237079620 container init be7b78738ecd7d4fe243fe78d25c5f21a3e3b543ef97c2759fbc5fc45dc3f6e6 (image=quay.io/ceph/ceph:v18, name=angry_euclid, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 20:07:00 compute-0 podman[90209]: 2025-11-25 20:07:00.505787994 +0000 UTC m=+0.244823189 container start be7b78738ecd7d4fe243fe78d25c5f21a3e3b543ef97c2759fbc5fc45dc3f6e6 (image=quay.io/ceph/ceph:v18, name=angry_euclid, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:00 compute-0 podman[90209]: 2025-11-25 20:07:00.508937258 +0000 UTC m=+0.247972453 container attach be7b78738ecd7d4fe243fe78d25c5f21a3e3b543ef97c2759fbc5fc45dc3f6e6 (image=quay.io/ceph/ceph:v18, name=angry_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:07:00 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1624567662; not ready for session (expect reconnect)
Nov 25 20:07:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:07:00 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:07:00 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 20:07:00 compute-0 podman[90301]: 2025-11-25 20:07:00.693576323 +0000 UTC m=+0.044380684 container create a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:07:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:07:00 compute-0 systemd[1]: Started libpod-conmon-a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b.scope.
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:07:00 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 20:07:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:00 compute-0 podman[90301]: 2025-11-25 20:07:00.675840629 +0000 UTC m=+0.026645010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:00 compute-0 podman[90301]: 2025-11-25 20:07:00.781499427 +0000 UTC m=+0.132303798 container init a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:00 compute-0 podman[90301]: 2025-11-25 20:07:00.788303388 +0000 UTC m=+0.139107759 container start a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:00 compute-0 competent_carver[90318]: 167 167
Nov 25 20:07:00 compute-0 systemd[1]: libpod-a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b.scope: Deactivated successfully.
Nov 25 20:07:00 compute-0 podman[90301]: 2025-11-25 20:07:00.79205356 +0000 UTC m=+0.142857941 container attach a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:07:00 compute-0 conmon[90318]: conmon a3669089ea8dd122fa1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b.scope/container/memory.events
Nov 25 20:07:00 compute-0 podman[90301]: 2025-11-25 20:07:00.793549493 +0000 UTC m=+0.144353864 container died a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 20:07:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d694c65e99bdc56cfa6f5a0123d5694886e110fca8a80b0036227bb1f71c4ff7-merged.mount: Deactivated successfully.
Nov 25 20:07:00 compute-0 podman[90301]: 2025-11-25 20:07:00.834845336 +0000 UTC m=+0.185649727 container remove a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:00 compute-0 systemd[1]: libpod-conmon-a3669089ea8dd122fa1ca88a47ce744fe0b2df0b4953e2342bfcdbd3ff2e620b.scope: Deactivated successfully.
Nov 25 20:07:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 20:07:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:07:00 compute-0 ceph-mon[75144]: Deploying daemon osd.2 on compute-0
Nov 25 20:07:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:07:00 compute-0 ceph-mon[75144]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 20:07:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 25 20:07:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:07:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 25 20:07:00 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662] boot
Nov 25 20:07:00 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 25 20:07:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 20:07:00 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:07:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:00 compute-0 ceph-osd[89084]: osd.0 9 state: booting -> active
Nov 25 20:07:00 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:00 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:07:00 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:00 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 20:07:01 compute-0 ceph-osd[90092]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057448c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057449400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057449400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057449400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057449400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluefs mount
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluefs mount shared_bdev_used = 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: RocksDB version: 7.9.2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Git sha 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: DB SUMMARY
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: DB Session ID:  F8IEC66F5Z41D9SARL68
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: CURRENT file:  CURRENT
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                         Options.error_if_exists: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.create_if_missing: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                                     Options.env: 0x559058113d50
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                                Options.info_log: 0x55905730a800
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                              Options.statistics: (nil)
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.use_fsync: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                              Options.db_log_dir: 
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.write_buffer_manager: 0x559058224460
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.unordered_write: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.row_cache: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                              Options.wal_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.two_write_queues: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.wal_compression: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.atomic_flush: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.max_background_jobs: 4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.max_background_compactions: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.max_subcompactions: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.max_open_files: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Compression algorithms supported:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kZSTD supported: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kXpressCompression supported: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kBZip2Compression supported: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kLZ4Compression supported: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kZlibCompression supported: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kSnappyCompression supported: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
[remaining options for this column family identical to those logged for [m-0] below]
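
Each per-column-family dump in this capture follows the same fixed "Options.<name>: <value>" layout, so a saved copy of the journal can be flattened into a dictionary for diffing across OSDs or boots. A minimal parsing sketch, assuming journal lines shaped exactly like those shown here (the file name osd-journal.log and the helper name parse_rocksdb_options are hypothetical):

    import re

    # "rocksdb: Options.x: y" lines, including dotted and indexed names
    # such as compression_opts.level and max_bytes_for_level_multiplier_addtl[0].
    OPT_RE = re.compile(r'Options\.([\w.\[\]]+):\s*(\S.*)$')
    # Deeply indented "key: value" continuation lines of the
    # table_factory options block.
    KV_RE = re.compile(r'^\s{20,}(\w+)\s*:\s*(\S.*)$')

    def parse_rocksdb_options(lines):
        """Flatten one dump into {option: value}; later column families win."""
        opts = {}
        for line in lines:
            m = OPT_RE.search(line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
                continue
            m = KV_RE.match(line)
            if m:
                opts['table_factory.' + m.group(1)] = m.group(2).strip()
        return opts

    if __name__ == '__main__':
        with open('osd-journal.log') as fh:  # hypothetical saved copy of this log
            opts = parse_rocksdb_options(fh)
        print(opts['write_buffer_size'], opts['compression'])
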
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730ae80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
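
Read together, the values logged for [m-0] pin down its write and compaction budget: 16 MiB memtables (write_buffer_size), at least 6 merged before a flush becomes eligible (min_write_buffer_number_to_merge), a hard cap of 64 memtables, and, with dynamic level sizing off, level capacities growing 8x from a 1 GiB base across 7 levels. A short sketch of that arithmetic using only the numbers above (illustrative only, not Ceph code):

    # Values copied from the [m-0] dump.
    write_buffer_size = 16 * 1024 * 1024            # 16777216
    max_write_buffer_number = 64
    min_write_buffer_number_to_merge = 6
    max_bytes_for_level_base = 1 << 30              # 1073741824
    max_bytes_for_level_multiplier = 8
    num_levels = 7

    # Memtable data that accumulates before a flush becomes eligible,
    # and the absolute cap across all memtables of this column family.
    flush_threshold = write_buffer_size * min_write_buffer_number_to_merge
    memtable_cap = write_buffer_size * max_write_buffer_number
    print(f"flush after ~{flush_threshold >> 20} MiB, cap {memtable_cap >> 20} MiB")

    # Per-level target sizes (level_compaction_dynamic_level_bytes is 0,
    # so sizes grow upward from the fixed base: L1=1 GiB, L2=8 GiB, ...).
    for level in range(1, num_levels):
        cap = max_bytes_for_level_base * max_bytes_for_level_multiplier ** (level - 1)
        print(f"L{level}: {cap >> 30} GiB")
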
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
[options for column family [m-1] identical to those logged for [m-0] above]
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
[options for column family [m-2] identical to those logged for [m-0] above]
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
[options for column family [p-0] identical to those logged for [m-0] above]
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
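[editor's note] The block above is RocksDB's per-column-family options dump, and ceph-osd repeats it nearly verbatim for every BlueStore shard that follows ([p-1], [p-2], [O-0], ...). Rather than eyeball the repeats, the dumps can be parsed and diffed; a minimal sketch, assuming the journal excerpt is saved to a file (the script and path names are hypothetical, and the regexes are derived from the log lines in this file):

#!/usr/bin/env python3
# Sketch: collect the "Options.<key>: <value>" pairs that ceph-osd's RocksDB
# instance prints per column family, so the repeated dumps can be diffed
# instead of read line by line. Lines before the first header (a column
# family named earlier in the log) and the unprefixed table_factory
# continuation lines are skipped.
import re
import sys

CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
OPTION = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+):\s*(\S.*)$")

def parse(path):
    cfs, current = {}, None
    with open(path) as fh:
        for line in fh:
            header = CF_HEADER.search(line)
            if header:
                current = cfs.setdefault(header.group(1), {})
                continue
            opt = OPTION.search(line)
            if opt and current is not None:
                current[opt.group(1)] = opt.group(2).rstrip()
    return cfs

if __name__ == "__main__":
    cfs = parse(sys.argv[1])
    for name, opts in cfs.items():
        print(f"{name}: {len(opts)} options")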
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730ae80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
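[editor's note] The level sizing implied by the [p-1] values can be worked out by hand: with level_compaction_dynamic_level_bytes: 0 and all addtl[] factors at 1, RocksDB targets base * multiplier^(N-1) bytes for level N (standard level-compaction behavior). The arithmetic below just restates the dumped numbers:

# Back-of-envelope sizing from the [p-1] dump above. With
# level_compaction_dynamic_level_bytes: 0, RocksDB sizes level N as
# base * multiplier**(N-1); the addtl[] factors are all 1 here.
GiB = 1024 ** 3
MiB = 1024 ** 2

base, multiplier, num_levels = 1073741824, 8.0, 7
for level in range(1, num_levels):
    print(f"L{level} target: {base * multiplier ** (level - 1) / GiB:.0f} GiB")

# Memtable budget: 16 MiB write buffers, merged 6 at a time before a flush,
# with at most 64 buffers resident per column family.
write_buffer_size, to_merge, max_buffers = 16777216, 6, 64
print(f"flush unit ~{write_buffer_size * to_merge / MiB:.0f} MiB,"
      f" memtable cap {write_buffer_size * max_buffers / GiB:.0f} GiB")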
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730ae80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
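[editor's note] The same dump also encodes the write-stall ladder: the L0 file-count triggers plus the soft/hard pending-compaction byte limits at which RocksDB first throttles and then stops foreground writes. Converted to readable units (values copied from the lines above):

# The stall ladder encoded in the dump: L0 file counts and pending
# compaction debt at which RocksDB throttles, then stops, writes.
GiB = 1024 ** 3

thresholds = {
    "L0 compaction trigger (files)": 8,
    "L0 slowdown trigger (files)": 20,
    "L0 stop trigger (files)": 36,
    "soft pending compaction limit": f"{68719476736 / GiB:.0f} GiB",
    "hard pending compaction limit": f"{274877906944 / GiB:.0f} GiB",
}
for name, value in thresholds.items():
    print(f"{name}: {value}")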
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730ae60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
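[editor's note] From [O-0] onward the table_factory lines point at a second BinnedLRUCache (0x5590572f2430, capacity 536870912) distinct from the one the p-* families share (0x5590572f2dd0, capacity 483183820). Each cache is split into 2**num_shard_bits shards; a quick sketch of the resulting shard sizes (the 0.9 ratio noted in the comment is an observation from the numbers only, not a documented setting):

# The p-* and O-* column families above use two distinct BinnedLRUCache
# instances. Per-shard capacity is capacity / 2**num_shard_bits.
MiB = 1024 ** 2

caches = {
    "p-* cache (0x5590572f2dd0)": 483183820,
    "O-* cache (0x5590572f2430)": 536870912,
}
num_shard_bits = 4
for name, capacity in caches.items():
    shard = capacity / 2 ** num_shard_bits
    print(f"{name}: {capacity / MiB:.1f} MiB total, "
          f"{2 ** num_shard_bits} shards of {shard / MiB:.1f} MiB")

# 483183820 bytes is 460.8 MiB, i.e. 0.9 * 512 MiB -- whether that ratio
# comes from Ceph's cache tuning is an assumption, not confirmed by this log.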
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730ae60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730ae60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 01d3e02d-c11c-48d5-b80d-2a41bcf546d5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101221053346, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101221053562, "job": 1, "event": "recovery_finished"}
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: freelist init
Nov 25 20:07:01 compute-0 ceph-osd[90092]: freelist _read_cfg
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluefs umount
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057449400 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:01 compute-0 podman[90567]: 2025-11-25 20:07:01.12917949 +0000 UTC m=+0.042622453 container create 03e57b6ec90466ab37f508047b4bd083183af0a144a82cd9949653888a224dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 20:07:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684950302' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 20:07:01 compute-0 angry_euclid[90251]: 
Nov 25 20:07:01 compute-0 angry_euclid[90251]: {"fsid":"712dd110-763a-5547-8ef7-acda1414fdce","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":110,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":9,"num_osds":3,"num_up_osds":1,"osd_up_since":1764101220,"num_in_osds":3,"osd_in_since":1764101203,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T20:06:58.714937+0000","services":{}},"progress_events":{}}
Nov 25 20:07:01 compute-0 systemd[1]: libpod-be7b78738ecd7d4fe243fe78d25c5f21a3e3b543ef97c2759fbc5fc45dc3f6e6.scope: Deactivated successfully.
Nov 25 20:07:01 compute-0 systemd[1]: Started libpod-conmon-03e57b6ec90466ab37f508047b4bd083183af0a144a82cd9949653888a224dde.scope.
Nov 25 20:07:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bca2d9a73becaa5a4cd58ff6680911425c26486684b53808c8845a813eea271/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bca2d9a73becaa5a4cd58ff6680911425c26486684b53808c8845a813eea271/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bca2d9a73becaa5a4cd58ff6680911425c26486684b53808c8845a813eea271/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bca2d9a73becaa5a4cd58ff6680911425c26486684b53808c8845a813eea271/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bca2d9a73becaa5a4cd58ff6680911425c26486684b53808c8845a813eea271/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:01 compute-0 podman[90567]: 2025-11-25 20:07:01.108660403 +0000 UTC m=+0.022103376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:01 compute-0 podman[90567]: 2025-11-25 20:07:01.213230028 +0000 UTC m=+0.126673011 container init 03e57b6ec90466ab37f508047b4bd083183af0a144a82cd9949653888a224dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:07:01 compute-0 podman[90583]: 2025-11-25 20:07:01.216121124 +0000 UTC m=+0.047855168 container died be7b78738ecd7d4fe243fe78d25c5f21a3e3b543ef97c2759fbc5fc45dc3f6e6 (image=quay.io/ceph/ceph:v18, name=angry_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 20:07:01 compute-0 podman[90567]: 2025-11-25 20:07:01.224570394 +0000 UTC m=+0.138013397 container start 03e57b6ec90466ab37f508047b4bd083183af0a144a82cd9949653888a224dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate-test, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:01 compute-0 podman[90567]: 2025-11-25 20:07:01.228384127 +0000 UTC m=+0.141827160 container attach 03e57b6ec90466ab37f508047b4bd083183af0a144a82cd9949653888a224dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate-test, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-46d7975544d33f5a10404489b9e695fb67e0a217f6cfa9942b9296c63b88db23-merged.mount: Deactivated successfully.
Nov 25 20:07:01 compute-0 podman[90583]: 2025-11-25 20:07:01.257080697 +0000 UTC m=+0.088814711 container remove be7b78738ecd7d4fe243fe78d25c5f21a3e3b543ef97c2759fbc5fc45dc3f6e6 (image=quay.io/ceph/ceph:v18, name=angry_euclid, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:01 compute-0 systemd[1]: libpod-conmon-be7b78738ecd7d4fe243fe78d25c5f21a3e3b543ef97c2759fbc5fc45dc3f6e6.scope: Deactivated successfully.
Nov 25 20:07:01 compute-0 sudo[90152]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057449400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057449400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057449400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bdev(0x559057449400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluefs mount
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluefs mount shared_bdev_used = 4718592
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: RocksDB version: 7.9.2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Git sha 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: DB SUMMARY
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: DB Session ID:  F8IEC66F5Z41D9SARL69
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: CURRENT file:  CURRENT
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                         Options.error_if_exists: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.create_if_missing: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                                     Options.env: 0x5590582d43f0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                                Options.info_log: 0x55905730b200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                              Options.statistics: (nil)
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.use_fsync: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                              Options.db_log_dir: 
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.write_buffer_manager: 0x559058224460
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.unordered_write: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.row_cache: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                              Options.wal_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.two_write_queues: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.wal_compression: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.atomic_flush: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.max_background_jobs: 4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.max_background_compactions: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.max_subcompactions: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.max_open_files: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Compression algorithms supported:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kZSTD supported: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kXpressCompression supported: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kBZip2Compression supported: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kLZ4Compression supported: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kZlibCompression supported: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         kSnappyCompression supported: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730a9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730a9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730a9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730a9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730a9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730a9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730a9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730af60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730af60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55905730af60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5590572f2430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 01d3e02d-c11c-48d5-b80d-2a41bcf546d5
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101221347714, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101221355919, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101221, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "01d3e02d-c11c-48d5-b80d-2a41bcf546d5", "db_session_id": "F8IEC66F5Z41D9SARL69", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101221361591, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101221, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "01d3e02d-c11c-48d5-b80d-2a41bcf546d5", "db_session_id": "F8IEC66F5Z41D9SARL69", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101221364950, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101221, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "01d3e02d-c11c-48d5-b80d-2a41bcf546d5", "db_session_id": "F8IEC66F5Z41D9SARL69", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101221366361, "job": 1, "event": "recovery_finished"}
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5590582e0000
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: DB pointer 0x559058215a00
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 25 20:07:01 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:07:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:07:01 compute-0 ceph-osd[90092]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 20:07:01 compute-0 ceph-osd[90092]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 20:07:01 compute-0 ceph-osd[90092]: _get_class not permitted to load lua
Nov 25 20:07:01 compute-0 ceph-osd[90092]: _get_class not permitted to load sdk
Nov 25 20:07:01 compute-0 ceph-osd[90092]: _get_class not permitted to load test_remote_reads
Nov 25 20:07:01 compute-0 ceph-osd[90092]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 20:07:01 compute-0 ceph-osd[90092]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 20:07:01 compute-0 ceph-osd[90092]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 20:07:01 compute-0 ceph-osd[90092]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 20:07:01 compute-0 ceph-osd[90092]: osd.1 0 load_pgs
Nov 25 20:07:01 compute-0 ceph-osd[90092]: osd.1 0 load_pgs opened 0 pgs
Nov 25 20:07:01 compute-0 ceph-osd[90092]: osd.1 0 log_to_monitors true
Nov 25 20:07:01 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1[90088]: 2025-11-25T20:07:01.395+0000 7fe181cdb740 -1 osd.1 0 log_to_monitors true
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 25 20:07:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 20:07:01 compute-0 sudo[90845]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccrvjmxuwztatluqkiuwvydzefkrvyzg ; /usr/bin/python3'
Nov 25 20:07:01 compute-0 sudo[90845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:01 compute-0 python3[90847]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:01 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate-test[90591]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 20:07:01 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate-test[90591]:                             [--no-systemd] [--no-tmpfs]
Nov 25 20:07:01 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate-test[90591]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 20:07:01 compute-0 systemd[1]: libpod-03e57b6ec90466ab37f508047b4bd083183af0a144a82cd9949653888a224dde.scope: Deactivated successfully.
Nov 25 20:07:01 compute-0 podman[90567]: 2025-11-25 20:07:01.850188955 +0000 UTC m=+0.763631938 container died 03e57b6ec90466ab37f508047b4bd083183af0a144a82cd9949653888a224dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:01 compute-0 podman[90848]: 2025-11-25 20:07:01.883662456 +0000 UTC m=+0.072084264 container create 272eae4b18017203e5f4daa4bcf85a19476a3329d053feb7aeafe3bb8a9bc429 (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 20:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bca2d9a73becaa5a4cd58ff6680911425c26486684b53808c8845a813eea271-merged.mount: Deactivated successfully.
Nov 25 20:07:01 compute-0 systemd[1]: Started libpod-conmon-272eae4b18017203e5f4daa4bcf85a19476a3329d053feb7aeafe3bb8a9bc429.scope.
Nov 25 20:07:01 compute-0 podman[90567]: 2025-11-25 20:07:01.934046708 +0000 UTC m=+0.847489671 container remove 03e57b6ec90466ab37f508047b4bd083183af0a144a82cd9949653888a224dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:07:01 compute-0 systemd[1]: libpod-conmon-03e57b6ec90466ab37f508047b4bd083183af0a144a82cd9949653888a224dde.scope: Deactivated successfully.
Nov 25 20:07:01 compute-0 podman[90848]: 2025-11-25 20:07:01.851385391 +0000 UTC m=+0.039807239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:01 compute-0 ceph-mon[75144]: OSD bench result of 6485.334060 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 20:07:01 compute-0 ceph-mon[75144]: osd.0 [v2:192.168.122.100:6802/1624567662,v1:192.168.122.100:6803/1624567662] boot
Nov 25 20:07:01 compute-0 ceph-mon[75144]: osdmap e9: 3 total, 1 up, 3 in
Nov 25 20:07:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 20:07:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:01 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2684950302' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 20:07:01 compute-0 ceph-mon[75144]: from='osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 20:07:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8d27a3ac14829a96f21d443bf917a25b5bcb7a6c12ea9ae4798262dff44849/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8d27a3ac14829a96f21d443bf917a25b5bcb7a6c12ea9ae4798262dff44849/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 25 20:07:01 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 20:07:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e10 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:01 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:07:01 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:02 compute-0 podman[90848]: 2025-11-25 20:07:02.001886286 +0000 UTC m=+0.190308154 container init 272eae4b18017203e5f4daa4bcf85a19476a3329d053feb7aeafe3bb8a9bc429 (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:02 compute-0 podman[90848]: 2025-11-25 20:07:02.012379457 +0000 UTC m=+0.200801255 container start 272eae4b18017203e5f4daa4bcf85a19476a3329d053feb7aeafe3bb8a9bc429 (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:02 compute-0 podman[90848]: 2025-11-25 20:07:02.016535471 +0000 UTC m=+0.204957279 container attach 272eae4b18017203e5f4daa4bcf85a19476a3329d053feb7aeafe3bb8a9bc429 (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:07:02 compute-0 systemd[1]: Reloading.
Nov 25 20:07:02 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 20:07:02 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 20:07:02 compute-0 systemd-sysv-generator[90949]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:07:02 compute-0 systemd-rc-local-generator[90942]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 20:07:02 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3482490045' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:02 compute-0 systemd[1]: Reloading.
Nov 25 20:07:02 compute-0 systemd-sysv-generator[90991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:07:02 compute-0 systemd-rc-local-generator[90985]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:07:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 20:07:02 compute-0 ceph-mgr[75443]: [devicehealth INFO root] creating mgr pool
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 25 20:07:02 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 20:07:02 compute-0 systemd[1]: Starting Ceph osd.2 for 712dd110-763a-5547-8ef7-acda1414fdce...
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 20:07:02 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 20:07:02 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3482490045' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:02 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 25 20:07:02 compute-0 ceph-osd[90092]: osd.1 0 done with init, starting boot process
Nov 25 20:07:02 compute-0 ceph-osd[90092]: osd.1 0 start_boot
Nov 25 20:07:02 compute-0 ceph-osd[90092]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 20:07:02 compute-0 ceph-osd[90092]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 20:07:02 compute-0 ceph-osd[90092]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 20:07:02 compute-0 ceph-osd[90092]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 20:07:02 compute-0 ceph-osd[90092]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 25 20:07:02 compute-0 gifted_ishizaka[90876]: pool 'vms' created
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 20:07:02 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:02 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:02 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:02 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:07:02 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:02 compute-0 ceph-mon[75144]: from='osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 20:07:02 compute-0 ceph-mon[75144]: osdmap e10: 3 total, 1 up, 3 in
Nov 25 20:07:02 compute-0 ceph-mon[75144]: from='osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 20:07:02 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:02 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:02 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3482490045' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:02 compute-0 ceph-mon[75144]: pgmap v34: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 20:07:02 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 20:07:02 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/717055667; not ready for session (expect reconnect)
Nov 25 20:07:03 compute-0 systemd[1]: libpod-272eae4b18017203e5f4daa4bcf85a19476a3329d053feb7aeafe3bb8a9bc429.scope: Deactivated successfully.
Nov 25 20:07:03 compute-0 podman[90848]: 2025-11-25 20:07:03.009070755 +0000 UTC m=+1.197492553 container died 272eae4b18017203e5f4daa4bcf85a19476a3329d053feb7aeafe3bb8a9bc429 (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:07:03 compute-0 ceph-osd[89084]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 20:07:03 compute-0 ceph-osd[89084]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 25 20:07:03 compute-0 ceph-osd[89084]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 20:07:03 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 11 pg[2.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [0] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 25 20:07:03 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 20:07:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:03 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:03 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:07:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8d27a3ac14829a96f21d443bf917a25b5bcb7a6c12ea9ae4798262dff44849-merged.mount: Deactivated successfully.
Nov 25 20:07:03 compute-0 podman[90848]: 2025-11-25 20:07:03.097473743 +0000 UTC m=+1.285895551 container remove 272eae4b18017203e5f4daa4bcf85a19476a3329d053feb7aeafe3bb8a9bc429 (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:07:03 compute-0 systemd[1]: libpod-conmon-272eae4b18017203e5f4daa4bcf85a19476a3329d053feb7aeafe3bb8a9bc429.scope: Deactivated successfully.
Nov 25 20:07:03 compute-0 sudo[90845]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:03 compute-0 podman[91060]: 2025-11-25 20:07:03.189994762 +0000 UTC m=+0.047223490 container create 08ee6ec5b3bfcc24cb2fea9b7512cbe48d59172ff520f9c4ee67dfb5a2d5c704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:07:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997c98a29acbfc0a62f9f30084fc8f637f3d452abdd8af90722044d0eff8fc81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997c98a29acbfc0a62f9f30084fc8f637f3d452abdd8af90722044d0eff8fc81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997c98a29acbfc0a62f9f30084fc8f637f3d452abdd8af90722044d0eff8fc81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997c98a29acbfc0a62f9f30084fc8f637f3d452abdd8af90722044d0eff8fc81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997c98a29acbfc0a62f9f30084fc8f637f3d452abdd8af90722044d0eff8fc81/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:03 compute-0 podman[91060]: 2025-11-25 20:07:03.165823896 +0000 UTC m=+0.023052644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:03 compute-0 podman[91060]: 2025-11-25 20:07:03.280166491 +0000 UTC m=+0.137395299 container init 08ee6ec5b3bfcc24cb2fea9b7512cbe48d59172ff520f9c4ee67dfb5a2d5c704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:03 compute-0 sudo[91101]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unqvtditvdighjmukknkuldzuujmnoce ; /usr/bin/python3'
Nov 25 20:07:03 compute-0 sudo[91101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:03 compute-0 podman[91060]: 2025-11-25 20:07:03.299289217 +0000 UTC m=+0.156517945 container start 08ee6ec5b3bfcc24cb2fea9b7512cbe48d59172ff520f9c4ee67dfb5a2d5c704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:03 compute-0 podman[91060]: 2025-11-25 20:07:03.30987167 +0000 UTC m=+0.167100488 container attach 08ee6ec5b3bfcc24cb2fea9b7512cbe48d59172ff520f9c4ee67dfb5a2d5c704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:03 compute-0 python3[91104]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:03 compute-0 podman[91106]: 2025-11-25 20:07:03.551501294 +0000 UTC m=+0.071171879 container create 95797c29211d57aa58364e8b43ace8a655e4c5c88f8572330ea60cd7bdc2e950 (image=quay.io/ceph/ceph:v18, name=zen_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:03 compute-0 systemd[1]: Started libpod-conmon-95797c29211d57aa58364e8b43ace8a655e4c5c88f8572330ea60cd7bdc2e950.scope.
Nov 25 20:07:03 compute-0 podman[91106]: 2025-11-25 20:07:03.52877999 +0000 UTC m=+0.048450635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82451a4780d075a20302231d78fd6e40e76718f671e22148acc55e2c65c6668/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82451a4780d075a20302231d78fd6e40e76718f671e22148acc55e2c65c6668/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:03 compute-0 podman[91106]: 2025-11-25 20:07:03.662380396 +0000 UTC m=+0.182051051 container init 95797c29211d57aa58364e8b43ace8a655e4c5c88f8572330ea60cd7bdc2e950 (image=quay.io/ceph/ceph:v18, name=zen_thompson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:07:03 compute-0 podman[91106]: 2025-11-25 20:07:03.675253308 +0000 UTC m=+0.194923903 container start 95797c29211d57aa58364e8b43ace8a655e4c5c88f8572330ea60cd7bdc2e950 (image=quay.io/ceph/ceph:v18, name=zen_thompson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:03 compute-0 podman[91106]: 2025-11-25 20:07:03.683915664 +0000 UTC m=+0.203586269 container attach 95797c29211d57aa58364e8b43ace8a655e4c5c88f8572330ea60cd7bdc2e950 (image=quay.io/ceph/ceph:v18, name=zen_thompson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:07:03 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/717055667; not ready for session (expect reconnect)
Nov 25 20:07:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 25 20:07:04 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 20:07:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 25 20:07:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:04 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:07:04 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:04 compute-0 ceph-mon[75144]: from='osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 20:07:04 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3482490045' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 20:07:04 compute-0 ceph-mon[75144]: osdmap e11: 3 total, 1 up, 3 in
Nov 25 20:07:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 20:07:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [0] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:07:04 compute-0 ceph-mgr[75443]: [devicehealth INFO root] creating main.db for devicehealth
Nov 25 20:07:04 compute-0 ceph-mgr[75443]: [devicehealth INFO root] Check health
Nov 25 20:07:04 compute-0 ceph-mgr[75443]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.1 ()
Nov 25 20:07:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 20:07:04 compute-0 ceph-mgr[75443]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2377369155' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 20:07:04 compute-0 sudo[91179]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 25 20:07:04 compute-0 sudo[91179]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 20:07:04 compute-0 sudo[91179]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 25 20:07:04 compute-0 sudo[91179]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 20:07:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 20:07:04 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate[91075]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 20:07:04 compute-0 bash[91060]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 20:07:04 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate[91075]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 20:07:04 compute-0 bash[91060]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 20:07:04 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate[91075]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 20:07:04 compute-0 bash[91060]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 20:07:04 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate[91075]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 20:07:04 compute-0 bash[91060]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 20:07:04 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate[91075]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 20:07:04 compute-0 bash[91060]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 20:07:04 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate[91075]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 20:07:04 compute-0 bash[91060]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 20:07:04 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate[91075]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 20:07:04 compute-0 bash[91060]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 20:07:04 compute-0 systemd[1]: libpod-08ee6ec5b3bfcc24cb2fea9b7512cbe48d59172ff520f9c4ee67dfb5a2d5c704.scope: Deactivated successfully.
Nov 25 20:07:04 compute-0 podman[91060]: 2025-11-25 20:07:04.554415066 +0000 UTC m=+1.411643784 container died 08ee6ec5b3bfcc24cb2fea9b7512cbe48d59172ff520f9c4ee67dfb5a2d5c704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:07:04 compute-0 systemd[1]: libpod-08ee6ec5b3bfcc24cb2fea9b7512cbe48d59172ff520f9c4ee67dfb5a2d5c704.scope: Consumed 1.238s CPU time.
Nov 25 20:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-997c98a29acbfc0a62f9f30084fc8f637f3d452abdd8af90722044d0eff8fc81-merged.mount: Deactivated successfully.
Nov 25 20:07:04 compute-0 podman[91060]: 2025-11-25 20:07:04.693812932 +0000 UTC m=+1.551041690 container remove 08ee6ec5b3bfcc24cb2fea9b7512cbe48d59172ff520f9c4ee67dfb5a2d5c704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 20:07:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v37: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 20:07:04 compute-0 podman[91348]: 2025-11-25 20:07:04.967422553 +0000 UTC m=+0.038960935 container create 6261bc1abd1201dfec4c6c35bef191ca296ebed37d34ced207e545f0cadcb250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:04 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/717055667; not ready for session (expect reconnect)
Nov 25 20:07:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:04 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:05 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35134817acf527403836bdd2eaafd9a0c6a05da8ccc6fdeb5c8bb5e2fb046bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35134817acf527403836bdd2eaafd9a0c6a05da8ccc6fdeb5c8bb5e2fb046bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35134817acf527403836bdd2eaafd9a0c6a05da8ccc6fdeb5c8bb5e2fb046bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35134817acf527403836bdd2eaafd9a0c6a05da8ccc6fdeb5c8bb5e2fb046bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35134817acf527403836bdd2eaafd9a0c6a05da8ccc6fdeb5c8bb5e2fb046bb/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 25 20:07:05 compute-0 ceph-mon[75144]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 20:07:05 compute-0 podman[91348]: 2025-11-25 20:07:05.040770104 +0000 UTC m=+0.112308486 container init 6261bc1abd1201dfec4c6c35bef191ca296ebed37d34ced207e545f0cadcb250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 20:07:05 compute-0 podman[91348]: 2025-11-25 20:07:04.949241105 +0000 UTC m=+0.020779447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:05 compute-0 podman[91348]: 2025-11-25 20:07:05.049173573 +0000 UTC m=+0.120711945 container start 6261bc1abd1201dfec4c6c35bef191ca296ebed37d34ced207e545f0cadcb250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:05 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.hdjasd(active, since 68s)
Nov 25 20:07:05 compute-0 bash[91348]: 6261bc1abd1201dfec4c6c35bef191ca296ebed37d34ced207e545f0cadcb250
Nov 25 20:07:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2377369155' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 25 20:07:05 compute-0 zen_thompson[91121]: pool 'volumes' created
Nov 25 20:07:05 compute-0 systemd[1]: Started Ceph osd.2 for 712dd110-763a-5547-8ef7-acda1414fdce.
Nov 25 20:07:05 compute-0 ceph-mon[75144]: purged_snaps scrub starts
Nov 25 20:07:05 compute-0 ceph-mon[75144]: purged_snaps scrub ok
Nov 25 20:07:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 20:07:05 compute-0 ceph-mon[75144]: osdmap e12: 3 total, 1 up, 3 in
Nov 25 20:07:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:05 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2377369155' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:05 compute-0 ceph-mon[75144]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 20:07:05 compute-0 ceph-mon[75144]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 20:07:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 20:07:05 compute-0 ceph-mon[75144]: pgmap v37: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 20:07:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:05 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 25 20:07:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:05 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:05 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:05 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:07:05 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:05 compute-0 systemd[1]: libpod-95797c29211d57aa58364e8b43ace8a655e4c5c88f8572330ea60cd7bdc2e950.scope: Deactivated successfully.
Nov 25 20:07:05 compute-0 podman[91106]: 2025-11-25 20:07:05.082613183 +0000 UTC m=+1.602283728 container died 95797c29211d57aa58364e8b43ace8a655e4c5c88f8572330ea60cd7bdc2e950 (image=quay.io/ceph/ceph:v18, name=zen_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:07:05 compute-0 sudo[90211]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:05 compute-0 ceph-osd[91367]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 20:07:05 compute-0 ceph-osd[91367]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: pidfile_write: ignore empty --pid-file
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17d905800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17d905800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17d905800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17d905800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e73d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e73d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e73d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e73d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e73d800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 20:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e82451a4780d075a20302231d78fd6e40e76718f671e22148acc55e2c65c6668-merged.mount: Deactivated successfully.
Nov 25 20:07:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:07:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:05 compute-0 podman[91106]: 2025-11-25 20:07:05.159326215 +0000 UTC m=+1.678996770 container remove 95797c29211d57aa58364e8b43ace8a655e4c5c88f8572330ea60cd7bdc2e950 (image=quay.io/ceph/ceph:v18, name=zen_thompson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:07:05 compute-0 systemd[1]: libpod-conmon-95797c29211d57aa58364e8b43ace8a655e4c5c88f8572330ea60cd7bdc2e950.scope: Deactivated successfully.
Nov 25 20:07:05 compute-0 sudo[91101]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:05 compute-0 sudo[91393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:05 compute-0 sudo[91393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:05 compute-0 sudo[91393]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:05 compute-0 sudo[91418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:05 compute-0 sudo[91418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:05 compute-0 sudo[91418]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:05 compute-0 sudo[91465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddecrmpbfrlxipmajvzbcyfgpsehccpy ; /usr/bin/python3'
Nov 25 20:07:05 compute-0 sudo[91465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:05 compute-0 sudo[91468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:05 compute-0 sudo[91468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:05 compute-0 sudo[91468]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17d905800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 20:07:05 compute-0 sudo[91494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:07:05 compute-0 sudo[91494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:05 compute-0 python3[91472]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:05 compute-0 podman[91521]: 2025-11-25 20:07:05.510414118 +0000 UTC m=+0.048105165 container create ca091ca9f5b3ed5b910d3addea362fd596a2675a56057b9cf84e6f6f7ee27f74 (image=quay.io/ceph/ceph:v18, name=intelligent_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:05 compute-0 ceph-osd[90092]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 34.180 iops: 8750.104 elapsed_sec: 0.343
Nov 25 20:07:05 compute-0 ceph-osd[90092]: log_channel(cluster) log [WRN] : OSD bench result of 8750.104174 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 20:07:05 compute-0 ceph-osd[90092]: osd.1 0 waiting for initial osdmap
Nov 25 20:07:05 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1[90088]: 2025-11-25T20:07:05.521+0000 7fe17dc5b640 -1 osd.1 0 waiting for initial osdmap
Nov 25 20:07:05 compute-0 ceph-osd[90092]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 20:07:05 compute-0 ceph-osd[90092]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 20:07:05 compute-0 ceph-osd[90092]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 20:07:05 compute-0 ceph-osd[90092]: osd.1 13 check_osdmap_features require_osd_release unknown -> reef
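check_osdmap_features records that the cluster now requires the reef release. The recorded value can be read back from the osdmap if needed (illustrative):

    # require_osd_release is carried in the osdmap
    ceph osd dump | grep require_osd_release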
Nov 25 20:07:05 compute-0 ceph-osd[90092]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 20:07:05 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-1[90088]: 2025-11-25T20:07:05.548+0000 7fe179283640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 20:07:05 compute-0 ceph-osd[90092]: osd.1 13 set_numa_affinity not setting numa affinity
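Automatic NUMA affinity fails here because no public interface is configured for osd.1, so no pinning is applied. On multi-socket hardware the node can be pinned explicitly instead of relying on interface detection; a sketch, with the node number as a placeholder:

    # Bypass interface-based detection and pin osd.1 to NUMA node 0
    ceph config set osd.1 osd_numa_node 0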
Nov 25 20:07:05 compute-0 ceph-osd[90092]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
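No unique device id can be derived for loop4 because loop devices expose neither a model nor a serial number; this is expected when OSDs are backed by loop-mounted files, as in this CI-style deployment. The absence is visible from udev (illustrative):

    # Loop devices carry no ID_MODEL/ID_SERIAL properties for metadata collection
    udevadm info --query=property --name=/dev/loop4 | grep -E 'ID_(MODEL|SERIAL)' || true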
Nov 25 20:07:05 compute-0 systemd[1]: Started libpod-conmon-ca091ca9f5b3ed5b910d3addea362fd596a2675a56057b9cf84e6f6f7ee27f74.scope.
Nov 25 20:07:05 compute-0 podman[91521]: 2025-11-25 20:07:05.492086825 +0000 UTC m=+0.029777852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4378e32e7938a813bcf4be1b98e19d0b3b2cd1836793fe2d323db39720750675/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4378e32e7938a813bcf4be1b98e19d0b3b2cd1836793fe2d323db39720750675/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
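The two kernel lines note that the xfs filesystem behind the container bind mounts was created without the bigtime feature, so its inode timestamps stop at 2038 (0x7fffffff seconds). Whether bigtime is enabled can be checked against the relevant mount (path illustrative):

    # bigtime=0 corresponds to the 32-bit timestamp limit logged above
    xfs_info /var/lib/containers | grep bigtime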
Nov 25 20:07:05 compute-0 podman[91521]: 2025-11-25 20:07:05.612267923 +0000 UTC m=+0.149959030 container init ca091ca9f5b3ed5b910d3addea362fd596a2675a56057b9cf84e6f6f7ee27f74 (image=quay.io/ceph/ceph:v18, name=intelligent_knuth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:05 compute-0 podman[91521]: 2025-11-25 20:07:05.620875788 +0000 UTC m=+0.158566815 container start ca091ca9f5b3ed5b910d3addea362fd596a2675a56057b9cf84e6f6f7ee27f74 (image=quay.io/ceph/ceph:v18, name=intelligent_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 20:07:05 compute-0 podman[91521]: 2025-11-25 20:07:05.625019211 +0000 UTC m=+0.162710268 container attach ca091ca9f5b3ed5b910d3addea362fd596a2675a56057b9cf84e6f6f7ee27f74 (image=quay.io/ceph/ceph:v18, name=intelligent_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:07:05 compute-0 ceph-osd[91367]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 25 20:07:05 compute-0 ceph-osd[91367]: load: jerasure load: lrc 
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 20:07:05 compute-0 podman[91592]: 2025-11-25 20:07:05.814440689 +0000 UTC m=+0.063003446 container create c7cce8d096ca7c37847b9287053fb85ef822515d93db48aba5abcf9cfa0ba8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 20:07:05 compute-0 systemd[1]: Started libpod-conmon-c7cce8d096ca7c37847b9287053fb85ef822515d93db48aba5abcf9cfa0ba8b5.scope.
Nov 25 20:07:05 compute-0 podman[91592]: 2025-11-25 20:07:05.785789431 +0000 UTC m=+0.034352238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:05 compute-0 podman[91592]: 2025-11-25 20:07:05.90262103 +0000 UTC m=+0.151183847 container init c7cce8d096ca7c37847b9287053fb85ef822515d93db48aba5abcf9cfa0ba8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:05 compute-0 podman[91592]: 2025-11-25 20:07:05.913451661 +0000 UTC m=+0.162014418 container start c7cce8d096ca7c37847b9287053fb85ef822515d93db48aba5abcf9cfa0ba8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:07:05 compute-0 podman[91592]: 2025-11-25 20:07:05.917413587 +0000 UTC m=+0.165976424 container attach c7cce8d096ca7c37847b9287053fb85ef822515d93db48aba5abcf9cfa0ba8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:07:05 compute-0 peaceful_pike[91609]: 167 167
Nov 25 20:07:05 compute-0 systemd[1]: libpod-c7cce8d096ca7c37847b9287053fb85ef822515d93db48aba5abcf9cfa0ba8b5.scope: Deactivated successfully.
Nov 25 20:07:05 compute-0 podman[91592]: 2025-11-25 20:07:05.919106608 +0000 UTC m=+0.167669335 container died c7cce8d096ca7c37847b9287053fb85ef822515d93db48aba5abcf9cfa0ba8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e684536bc7e605e587702d6c85d34ec77b8598116766c5423470bb7ba105207-merged.mount: Deactivated successfully.
Nov 25 20:07:05 compute-0 ceph-osd[91367]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
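The two figures are mutually consistent and tie back to the same 315 IOPS default seen for osd.1 above: the per-I/O cost is the per-shard bandwidth capacity divided by the IOPS capacity, and 157286400 bytes/s is 150 MiB/s, matching the documented default of osd_mclock_max_sequential_bandwidth_hdd:

    157286400 bytes/s / 315 IOPS ~= 499321.9 bytes per I/O  (osd_bandwidth_cost_per_io)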
Nov 25 20:07:05 compute-0 ceph-osd[91367]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c8c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c9400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c9400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c9400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c9400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluefs mount
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 20:07:05 compute-0 podman[91592]: 2025-11-25 20:07:05.955124364 +0000 UTC m=+0.203687091 container remove c7cce8d096ca7c37847b9287053fb85ef822515d93db48aba5abcf9cfa0ba8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
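The short-lived peaceful_pike container above printed '167 167' and was removed within roughly 150 ms of being created; 167:167 is the uid:gid that owns /var/lib/ceph inside the image, and the run is consistent with cephadm's usual ownership probe before invoking ceph-volume. A sketch of an equivalent probe (assumed, not taken verbatim from the log):

    # Print the uid and gid owning /var/lib/ceph inside the pinned image
    podman run --rm --entrypoint stat \
      quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
      -c '%u %g' /var/lib/ceph    # expected output: 167 167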
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluefs mount shared_bdev_used = 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
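The db and db.slow budgets of 20397110067 bytes equal 95% of the 21470642176-byte block device opened above, i.e. RocksDB is allowed to occupy at most 95% of the shared main device:

    21470642176 bytes x 0.95 = 20397110067 bytes  (db / db.slow size cap)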
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: RocksDB version: 7.9.2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Git sha 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: DB SUMMARY
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: DB Session ID:  CCB4UB7IZ11LWMW8746Z
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: CURRENT file:  CURRENT
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                         Options.error_if_exists: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.create_if_missing: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                                     Options.env: 0x55b17e78fc70
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                                Options.info_log: 0x55b17d98c8a0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                              Options.statistics: (nil)
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.use_fsync: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                              Options.db_log_dir: 
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.write_buffer_manager: 0x55b17e8a2460
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.unordered_write: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.row_cache: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                              Options.wal_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.two_write_queues: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.wal_compression: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.atomic_flush: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.max_background_jobs: 4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.max_background_compactions: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.max_subcompactions: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.max_open_files: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Compression algorithms supported:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         kZSTD supported: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         kXpressCompression supported: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         kBZip2Compression supported: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         kLZ4Compression supported: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         kZlibCompression supported: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         kSnappyCompression supported: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
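Within the table_factory dump above, the BinnedLRUCache capacity of 483183820 bytes is 45% of the 1 GiB BlueStore cache reported by _set_cache_sizes, consistent with the kv share of the cache being handed to RocksDB; the same block_cache pointer (0x55b17d9791f0) recurs in every column family that follows:

    1073741824 bytes x 0.45 = 483183820.8 ~= 483183820 bytes  (block_cache capacity)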
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
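The option dump now repeats, near-verbatim, for the additional column families m-0 and m-1: BlueStore shards its RocksDB keyspace across several column families, and in this dump only [default] carries a merge_operator. The sharding layout this OSD was built with can be read back from its configuration (illustrative):

    # Column-family sharding definition used at mkfs time
    ceph config get osd.2 bluestore_rocksdb_cfs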
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
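[annotation] Every column-family dump in this startup sequence shares the same level-compaction sizing: target_file_size_base 64 MiB, max_bytes_for_level_base 1 GiB, max_bytes_for_level_multiplier 8, all addtl factors 1, and level_compaction_dynamic_level_bytes off, so static sizing applies. A minimal sketch (not Ceph/RocksDB code) of the per-level targets these values imply, assuming RocksDB's static rule of base times multiplier per level:

# Reproduce RocksDB's static level sizing from the dumped values.
BASE = 1073741824        # Options.max_bytes_for_level_base (1 GiB)
MULT = 8.0               # Options.max_bytes_for_level_multiplier
ADDTL = [1] * 7          # Options.max_bytes_for_level_multiplier_addtl[i], all 1 here
NUM_LEVELS = 7           # Options.num_levels

cap = BASE
for level in range(1, NUM_LEVELS):
    print(f"L{level}: {cap / 2**30:.0f} GiB")
    cap = int(cap * MULT * ADDTL[level - 1])

This prints L1: 1 GiB up through L6: 32768 GiB, so on this 20 GiB device the disk itself, not the level caps, is what bounds growth.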
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d979090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
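[annotation] The table_factory blocks for each shard also report the same 512 MiB BinnedLRUCache (the block_cache pointer 0x55b17d979090 repeats across dumps, i.e. one shared cache) plus a 16 MiB write buffer with up to 64 memtables merged 6 at a time. A back-of-the-envelope sketch, assuming the usual RocksDB sharded-cache convention that shard count is 2**num_shard_bits:

GiB, MiB = 2**30, 2**20

capacity = 536870912            # block_cache_options: capacity
num_shard_bits = 4              # block_cache_options: num_shard_bits
shards = 2 ** num_shard_bits
print(f"block cache: {capacity / MiB:.0f} MiB in {shards} shards "
      f"of {capacity / shards / MiB:.0f} MiB")

write_buffer_size = 16777216    # Options.write_buffer_size
max_write_buffers = 64          # Options.max_write_buffer_number
merge_min = 6                   # Options.min_write_buffer_number_to_merge
print(f"memtables: up to {write_buffer_size * max_write_buffers / GiB:.0f} GiB "
      f"per column family; a flush merges {merge_min} memtables "
      f"(~{write_buffer_size * merge_min / MiB:.0f} MiB)")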
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d979090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d979090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
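[annotation] The recovery lines above enumerate twelve column families (IDs 0 through 11), consistent with max_column_family 11 in the manifest line; the m-*/p-*/O-* names appear to be BlueStore's sharded column families. A hypothetical parser (names and regex are illustrative, not Ceph tooling) for pulling (name, ID) pairs out of such lines:

import re

CF_RE = re.compile(r"Column family \[(?P<name>[^\]]+)\] \(ID (?P<id>\d+)\)")

sample = "rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5"
m = CF_RE.search(sample)
if m:
    print(m.group("name"), int(m.group("id")))   # -> O-0 7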
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6c474599-1ebd-4379-83ca-7c5e9d4d2889
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101225990951, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101225991195, "job": 1, "event": "recovery_finished"}
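[annotation] EVENT_LOG_v1 records carry a JSON payload after the marker, so the two recovery events above can be compared directly; per the logged time_micros values, replaying WAL #31 took 244 microseconds. A sketch using the two lines verbatim:

import json

def event_payload(line: str) -> dict:
    # Everything after the EVENT_LOG_v1 marker is plain JSON.
    return json.loads(line.split("EVENT_LOG_v1", 1)[1].strip())

started  = event_payload('rocksdb: EVENT_LOG_v1 {"time_micros": 1764101225990951, "job": 1, "event": "recovery_started", "wal_files": [31]}')
finished = event_payload('rocksdb: EVENT_LOG_v1 {"time_micros": 1764101225991195, "job": 1, "event": "recovery_finished"}')
print(finished["time_micros"] - started["time_micros"], "microseconds")  # 244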
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
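[annotation] The _open_db line records the flat key=value string BlueStore handed to RocksDB (in Ceph this string conventionally comes from the bluestore_rocksdb_options setting; treat that mapping as an assumption here). Its values match the per-column-family dumps above: LZ4, 64 write buffers of 16 MiB merged 6 at a time, level-style compaction with trigger 8, 1 GiB level base, multiplier 8. A sketch that splits the string, copied verbatim from the log line, into a dict:

opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
print(opts["compression"], opts["write_buffer_size"])  # kLZ4Compression 16777216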
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 25 20:07:05 compute-0 ceph-osd[91367]: freelist init
Nov 25 20:07:05 compute-0 ceph-osd[91367]: freelist _read_cfg
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
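[annotation] The _init_alloc figures are self-consistent: capacity 0x4ffc00000 is the same 21470642176 bytes the later bdev open line reports, and capacity minus free is exactly three 4 KiB blocks, matching the tiny logged fragmentation. A quick conversion sketch over the logged hex values:

capacity = 0x4ffc00000             # logged capacity
free     = 0x4ffbfd000             # logged free space
block    = 0x1000                  # logged block size (4 KiB)

print(capacity)                    # 21470642176 bytes
print(f"{capacity / 2**30:.3f} GiB")   # ~19.996, logged (rounded) as "20 GiB"
print((capacity - free) // block)  # 3 blocks (12 KiB) in use right after init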
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 20:07:05 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bluefs umount
Nov 25 20:07:05 compute-0 ceph-osd[91367]: bdev(0x55b17e7c9400 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 20:07:05 compute-0 systemd[1]: libpod-conmon-c7cce8d096ca7c37847b9287053fb85ef822515d93db48aba5abcf9cfa0ba8b5.scope: Deactivated successfully.
Nov 25 20:07:05 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/717055667; not ready for session (expect reconnect)
Nov 25 20:07:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:05 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:06 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 20:07:06 compute-0 ceph-mon[75144]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 20:07:06 compute-0 ceph-mon[75144]: mgrmap e9: compute-0.hdjasd(active, since 68s)
Nov 25 20:07:06 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2377369155' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:06 compute-0 ceph-mon[75144]: osdmap e13: 3 total, 1 up, 3 in
Nov 25 20:07:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 25 20:07:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 25 20:07:06 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667] boot
Nov 25 20:07:06 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 25 20:07:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 20:07:06 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:06 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:06 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:06 compute-0 ceph-osd[90092]: osd.1 14 state: booting -> active
Nov 25 20:07:06 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:06 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:06 compute-0 podman[91847]: 2025-11-25 20:07:06.152881138 +0000 UTC m=+0.057138812 container create 853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:07:06 compute-0 systemd[1]: Started libpod-conmon-853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779.scope.
Nov 25 20:07:06 compute-0 podman[91847]: 2025-11-25 20:07:06.127414725 +0000 UTC m=+0.031672409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 20:07:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2205905855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bdev(0x55b17e7c9400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bdev(0x55b17e7c9400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bdev(0x55b17e7c9400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bdev(0x55b17e7c9400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bluefs mount
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bluefs mount shared_bdev_used = 4718592
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: RocksDB version: 7.9.2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Git sha 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: DB SUMMARY
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: DB Session ID:  CCB4UB7IZ11LWMW8746Y
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: CURRENT file:  CURRENT
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
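[annotation] The DB SUMMARY agrees with the earlier manifest-recovery line: MANIFEST-000032 matches manifest_file_number 32, and the single SST 000030.sst and WAL 000031.log both sit below next_file_number 34. A hypothetical consistency check (RocksDB file names embed the file number, which this relies on):

import re

NEXT_FILE_NUMBER = 34   # from the "Recovered from manifest" line earlier
files = ["MANIFEST-000032", "000030.sst", "000031.log"]
numbers = [int(re.search(r"\d+", f).group()) for f in files]
assert all(n < NEXT_FILE_NUMBER for n in numbers), numbers
print(numbers)   # [32, 30, 31]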
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                         Options.error_if_exists: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.create_if_missing: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                                     Options.env: 0x55b17e94aaf0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                                Options.info_log: 0x55b17d98c620
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                              Options.statistics: (nil)
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.use_fsync: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                              Options.db_log_dir: 
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.write_buffer_manager: 0x55b17e8a26e0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.unordered_write: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.row_cache: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                              Options.wal_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.two_write_queues: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.wal_compression: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.atomic_flush: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.max_background_jobs: 4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.max_background_compactions: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.max_subcompactions: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.max_open_files: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Compression algorithms supported:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         kZSTD supported: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         kXpressCompression supported: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         kBZip2Compression supported: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         kLZ4Compression supported: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         kZlibCompression supported: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         kSnappyCompression supported: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/695719fef5efb8f65d62e28367fee92bc254f8459c27e5c123c0a4eba115e48d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/695719fef5efb8f65d62e28367fee92bc254f8459c27e5c123c0a4eba115e48d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/695719fef5efb8f65d62e28367fee92bc254f8459c27e5c123c0a4eba115e48d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/695719fef5efb8f65d62e28367fee92bc254f8459c27e5c123c0a4eba115e48d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d9791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d979090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d979090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
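[annotation] Back-of-envelope check on the BinnedLRUCache numbers in the table_factory block above (arithmetic only, nothing assumed beyond the printed values):

    capacity = 536_870_912           # block_cache_options.capacity, bytes
    num_shard_bits = 4
    shards = 1 << num_shard_bits     # 16 shards
    per_shard = capacity // shards   # 33_554_432 bytes
    print(capacity / 2**20, shards, per_shard / 2**20)   # 512.0 16 32.0

i.e. a 512 MiB cache split into 16 LRU shards of 32 MiB each, serving the 4096-byte blocks configured above.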
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
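[annotation] Taken together, the three write-buffer settings above imply a sizeable memtable budget per column family; a quick check with the values straight from the dump:

    write_buffer_size = 16_777_216           # 16 MiB per memtable
    max_write_buffer_number = 64             # up to 64 held in memory
    min_merge = 6                            # flushed in groups of 6
    ceiling = write_buffer_size * max_write_buffer_number   # 1_073_741_824 = 1 GiB worst case
    flush_unit = write_buffer_size * min_merge              # 100_663_296 ≈ 96 MiB per flush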
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
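[annotation] A note on the compression_opts block above: the level of 32767 (0x7FFF) appears to be RocksDB's "use the codec's default" sentinel rather than a literal compression level, and the negative window_bits is a zlib convention (raw deflate) — with LZ4 selected both fields are effectively inert.

    # Sentinel check, matching the dump above.
    K_DEFAULT_COMPRESSION_LEVEL = 32767
    assert K_DEFAULT_COMPRESSION_LEVEL == 0x7FFF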
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
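[annotation] With level_compaction_dynamic_level_bytes off, the per-level size targets follow directly from the base and multiplier printed above:

    base = 1_073_741_824   # max_bytes_for_level_base, 1 GiB
    mult = 8.0             # max_bytes_for_level_multiplier
    for level in range(1, 7):
        print(f"L{level}: {base * mult ** (level - 1) / 2**30:.0f} GiB")
    # L1: 1  L2: 8  L3: 64  L4: 512  L5: 4096  L6: 32768 (GiB)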
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
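[annotation] The CompactOnDeletionCollector named above is what turns tombstone-heavy SSTs into compaction candidates: a 32768-entry window slides over the file's keys and the file is flagged once any window holds at least 16384 deletes (the universal/FIFO blocks above it are inert here, since compaction_style is kCompactionStyleLevel). A minimal sketch of that heuristic — an illustration, not the RocksDB implementation:

    from collections import deque

    def needs_compaction(entries, window=32_768, trigger=16_384):
        # entries: iterable of booleans, True for a delete/tombstone
        recent = deque(maxlen=window)
        deletions = 0
        for is_delete in entries:
            if len(recent) == window:
                deletions -= recent[0]   # evicted by the append below
            recent.append(is_delete)
            deletions += is_delete
            if deletions >= trigger:
                return True
        return False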
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
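[annotation] The ttl above is in seconds and works out to exactly 30 days — the age past which RocksDB considers an SST file stale enough to schedule for compaction:

    assert 2_592_000 == 30 * 24 * 3600   # ttl from the dump = 30 days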
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 podman[91847]: 2025-11-25 20:07:06.27145737 +0000 UTC m=+0.175715124 container init 853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_liskov, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:           Options.merge_operator: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b17d98c380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b17d979090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.compression: LZ4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.num_levels: 7
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.bloom_locality: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                               Options.ttl: 2592000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                       Options.enable_blob_files: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                           Options.min_blob_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
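[annotation] Twelve column families come back from the manifest, all at log number 5 — BlueStore's sharded layout (the m-*, p-* and O-* shards plus default, L and P). A small sketch for lifting that inventory out of the recovery lines, e.g. to assert every family sits at the same log number; the regex mirrors the version_set.cc message format above:

    import re

    CF_RE = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\), log number is (\d+)")

    def recovered_families(lines):
        return [(m.group(1), int(m.group(2)), int(m.group(3)))
                for line in lines
                if (m := CF_RE.search(line))]

    # consistent recovery point?
    # len({log for _, _, log in recovered_families(lines)}) == 1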
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6c474599-1ebd-4379-83ca-7c5e9d4d2889
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101226248987, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101226261375, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101226, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c474599-1ebd-4379-83ca-7c5e9d4d2889", "db_session_id": "CCB4UB7IZ11LWMW8746Y", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101226264312, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101226, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c474599-1ebd-4379-83ca-7c5e9d4d2889", "db_session_id": "CCB4UB7IZ11LWMW8746Y", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101226269786, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101226, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c474599-1ebd-4379-83ca-7c5e9d4d2889", "db_session_id": "CCB4UB7IZ11LWMW8746Y", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101226271298, "job": 1, "event": "recovery_finished"}
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 20:07:06 compute-0 podman[91847]: 2025-11-25 20:07:06.284610318 +0000 UTC m=+0.188867982 container start 853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:06 compute-0 podman[91847]: 2025-11-25 20:07:06.287873625 +0000 UTC m=+0.192131359 container attach 853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_liskov, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b17dae6000
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: DB pointer 0x55b17e88ba00
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
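[annotation] The option string BlueStore reports above is a flat comma-separated k=v list, so it splits mechanically; reproducing it verbatim from the line above:

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
    opts = dict(item.split("=", 1) for item in opts_str.split(","))
    assert opts["write_buffer_size"] == "16777216"   # matches the per-CF dumps above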
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 25 20:07:06 compute-0 ceph-osd[91367]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:07:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:07:06 compute-0 ceph-osd[91367]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 20:07:06 compute-0 ceph-osd[91367]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 20:07:06 compute-0 ceph-osd[91367]: _get_class not permitted to load lua
Nov 25 20:07:06 compute-0 ceph-osd[91367]: _get_class not permitted to load sdk
Nov 25 20:07:06 compute-0 ceph-osd[91367]: _get_class not permitted to load test_remote_reads
Nov 25 20:07:06 compute-0 ceph-osd[91367]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 20:07:06 compute-0 ceph-osd[91367]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 20:07:06 compute-0 ceph-osd[91367]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 20:07:06 compute-0 ceph-osd[91367]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 20:07:06 compute-0 ceph-osd[91367]: osd.2 0 load_pgs
Nov 25 20:07:06 compute-0 ceph-osd[91367]: osd.2 0 load_pgs opened 0 pgs
Nov 25 20:07:06 compute-0 ceph-osd[91367]: osd.2 0 log_to_monitors true
Nov 25 20:07:06 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2[91363]: 2025-11-25T20:07:06.302+0000 7f305eeb8740 -1 osd.2 0 log_to_monitors true
Nov 25 20:07:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 25 20:07:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 20:07:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v40: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 25 20:07:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 25 20:07:07 compute-0 ceph-mon[75144]: OSD bench result of 8750.104174 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 20:07:07 compute-0 ceph-mon[75144]: osd.1 [v2:192.168.122.100:6806/717055667,v1:192.168.122.100:6807/717055667] boot
Nov 25 20:07:07 compute-0 ceph-mon[75144]: osdmap e14: 3 total, 2 up, 3 in
Nov 25 20:07:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 20:07:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:07 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2205905855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:07 compute-0 ceph-mon[75144]: from='osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 20:07:07 compute-0 ceph-mon[75144]: pgmap v40: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 25 20:07:07 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2205905855' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:07 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 20:07:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 25 20:07:07 compute-0 intelligent_knuth[91544]: pool 'backups' created
Nov 25 20:07:07 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 25 20:07:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 20:07:07 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 20:07:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e15 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 20:07:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:07 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:07 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:07 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:07:07 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:07:07 compute-0 systemd[1]: libpod-ca091ca9f5b3ed5b910d3addea362fd596a2675a56057b9cf84e6f6f7ee27f74.scope: Deactivated successfully.
Nov 25 20:07:07 compute-0 podman[91521]: 2025-11-25 20:07:07.196471865 +0000 UTC m=+1.734162912 container died ca091ca9f5b3ed5b910d3addea362fd596a2675a56057b9cf84e6f6f7ee27f74 (image=quay.io/ceph/ceph:v18, name=intelligent_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 20:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4378e32e7938a813bcf4be1b98e19d0b3b2cd1836793fe2d323db39720750675-merged.mount: Deactivated successfully.
Nov 25 20:07:07 compute-0 podman[91521]: 2025-11-25 20:07:07.262462668 +0000 UTC m=+1.800153715 container remove ca091ca9f5b3ed5b910d3addea362fd596a2675a56057b9cf84e6f6f7ee27f74 (image=quay.io/ceph/ceph:v18, name=intelligent_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:07 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 20:07:07 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 20:07:07 compute-0 systemd[1]: libpod-conmon-ca091ca9f5b3ed5b910d3addea362fd596a2675a56057b9cf84e6f6f7ee27f74.scope: Deactivated successfully.
Nov 25 20:07:07 compute-0 sudo[91465]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:07 compute-0 epic_liskov[91863]: {
Nov 25 20:07:07 compute-0 epic_liskov[91863]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "osd_id": 2,
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "type": "bluestore"
Nov 25 20:07:07 compute-0 epic_liskov[91863]:     },
Nov 25 20:07:07 compute-0 epic_liskov[91863]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "osd_id": 1,
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "type": "bluestore"
Nov 25 20:07:07 compute-0 epic_liskov[91863]:     },
Nov 25 20:07:07 compute-0 epic_liskov[91863]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "osd_id": 0,
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:07:07 compute-0 epic_liskov[91863]:         "type": "bluestore"
Nov 25 20:07:07 compute-0 epic_liskov[91863]:     }
Nov 25 20:07:07 compute-0 epic_liskov[91863]: }
Nov 25 20:07:07 compute-0 systemd[1]: libpod-853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779.scope: Deactivated successfully.
Nov 25 20:07:07 compute-0 podman[91847]: 2025-11-25 20:07:07.376388991 +0000 UTC m=+1.280646685 container died 853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:07 compute-0 systemd[1]: libpod-853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779.scope: Consumed 1.087s CPU time.
Nov 25 20:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-695719fef5efb8f65d62e28367fee92bc254f8459c27e5c123c0a4eba115e48d-merged.mount: Deactivated successfully.
Nov 25 20:07:07 compute-0 podman[91847]: 2025-11-25 20:07:07.444422525 +0000 UTC m=+1.348680199 container remove 853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_liskov, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 20:07:07 compute-0 systemd[1]: libpod-conmon-853b343a85c7a7f27fe016660595eb8164fee88a27937308c8c8ca39e5834779.scope: Deactivated successfully.
Nov 25 20:07:07 compute-0 sudo[92164]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwfjahhyzztrkclozzbqjpxjpfbjivh ; /usr/bin/python3'
Nov 25 20:07:07 compute-0 sudo[92164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:07 compute-0 sudo[91494]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:07:07 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:07:07 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:07 compute-0 sudo[92167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:07 compute-0 sudo[92167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:07 compute-0 sudo[92167]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:07 compute-0 python3[92166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:07 compute-0 sudo[92192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:07:07 compute-0 sudo[92192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:07 compute-0 sudo[92192]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:07 compute-0 sudo[92224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:07 compute-0 podman[92213]: 2025-11-25 20:07:07.741232182 +0000 UTC m=+0.078976849 container create 1cf9d5c70e4b139aab710fced46f52f51f11b6357ffbd82566f4d0840969daa3 (image=quay.io/ceph/ceph:v18, name=brave_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:07 compute-0 sudo[92224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:07 compute-0 sudo[92224]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:07 compute-0 systemd[1]: Started libpod-conmon-1cf9d5c70e4b139aab710fced46f52f51f11b6357ffbd82566f4d0840969daa3.scope.
Nov 25 20:07:07 compute-0 podman[92213]: 2025-11-25 20:07:07.706049331 +0000 UTC m=+0.043794048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45931cc4de6389f19ba109d12ef2381372263dd1711ebd0acb4838efd914c01d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45931cc4de6389f19ba109d12ef2381372263dd1711ebd0acb4838efd914c01d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:07 compute-0 sudo[92255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:07 compute-0 sudo[92255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:07 compute-0 sudo[92255]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:07 compute-0 podman[92213]: 2025-11-25 20:07:07.847270232 +0000 UTC m=+0.185014949 container init 1cf9d5c70e4b139aab710fced46f52f51f11b6357ffbd82566f4d0840969daa3 (image=quay.io/ceph/ceph:v18, name=brave_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:07:07 compute-0 podman[92213]: 2025-11-25 20:07:07.859131582 +0000 UTC m=+0.196876239 container start 1cf9d5c70e4b139aab710fced46f52f51f11b6357ffbd82566f4d0840969daa3 (image=quay.io/ceph/ceph:v18, name=brave_grothendieck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 20:07:07 compute-0 podman[92213]: 2025-11-25 20:07:07.862772661 +0000 UTC m=+0.200517338 container attach 1cf9d5c70e4b139aab710fced46f52f51f11b6357ffbd82566f4d0840969daa3 (image=quay.io/ceph/ceph:v18, name=brave_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:07 compute-0 sudo[92285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:07 compute-0 sudo[92285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:07 compute-0 sudo[92285]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:07 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:08 compute-0 sudo[92311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:07:08 compute-0 sudo[92311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 25 20:07:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 20:07:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 25 20:07:08 compute-0 ceph-osd[91367]: osd.2 0 done with init, starting boot process
Nov 25 20:07:08 compute-0 ceph-osd[91367]: osd.2 0 start_boot
Nov 25 20:07:08 compute-0 ceph-osd[91367]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 20:07:08 compute-0 ceph-osd[91367]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 20:07:08 compute-0 ceph-osd[91367]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 20:07:08 compute-0 ceph-osd[91367]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 20:07:08 compute-0 ceph-osd[91367]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 25 20:07:08 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 25 20:07:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 16 pg[2.0( v 12'32 (0'0,12'32] local-lis/les=11/12 n=2 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=16 pruub=11.860945702s) [] r=-1 lpr=16 pi=[11,16)/1 crt=12'32 lcod 12'31 mlcod 12'31 active pruub 24.029441833s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:07:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 16 pg[2.0( v 12'32 (0'0,12'32] local-lis/les=11/12 n=2 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=16 pruub=11.860945702s) [] r=-1 lpr=16 pi=[11,16)/1 crt=12'32 lcod 12'31 mlcod 0'0 unknown NOTIFY pruub 24.029441833s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:07:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:08 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:07:08 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:08 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2205905855' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:08 compute-0 ceph-mon[75144]: from='osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 20:07:08 compute-0 ceph-mon[75144]: osdmap e15: 3 total, 2 up, 3 in
Nov 25 20:07:08 compute-0 ceph-mon[75144]: from='osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 20:07:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:08 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1291074225; not ready for session (expect reconnect)
Nov 25 20:07:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:08 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:08 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 20:07:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2553797956' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:08 compute-0 podman[92424]: 2025-11-25 20:07:08.625767329 +0000 UTC m=+0.103613278 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v43: 4 pgs: 1 unknown, 2 creating+peering, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 25 20:07:08 compute-0 podman[92424]: 2025-11-25 20:07:08.7473963 +0000 UTC m=+0.225242249 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:07:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 25 20:07:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2553797956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 25 20:07:09 compute-0 brave_grothendieck[92275]: pool 'images' created
Nov 25 20:07:09 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1291074225; not ready for session (expect reconnect)
Nov 25 20:07:09 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 25 20:07:09 compute-0 systemd[1]: libpod-1cf9d5c70e4b139aab710fced46f52f51f11b6357ffbd82566f4d0840969daa3.scope: Deactivated successfully.
Nov 25 20:07:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:09 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:09 compute-0 podman[92213]: 2025-11-25 20:07:09.205559854 +0000 UTC m=+1.543304531 container died 1cf9d5c70e4b139aab710fced46f52f51f11b6357ffbd82566f4d0840969daa3 (image=quay.io/ceph/ceph:v18, name=brave_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:07:09 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:09 compute-0 ceph-mon[75144]: from='osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 20:07:09 compute-0 ceph-mon[75144]: osdmap e16: 3 total, 2 up, 3 in
Nov 25 20:07:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:09 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2553797956' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:09 compute-0 ceph-mon[75144]: pgmap v43: 4 pgs: 1 unknown, 2 creating+peering, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 25 20:07:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-45931cc4de6389f19ba109d12ef2381372263dd1711ebd0acb4838efd914c01d-merged.mount: Deactivated successfully.
Nov 25 20:07:09 compute-0 podman[92213]: 2025-11-25 20:07:09.278219286 +0000 UTC m=+1.615963933 container remove 1cf9d5c70e4b139aab710fced46f52f51f11b6357ffbd82566f4d0840969daa3 (image=quay.io/ceph/ceph:v18, name=brave_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:09 compute-0 systemd[1]: libpod-conmon-1cf9d5c70e4b139aab710fced46f52f51f11b6357ffbd82566f4d0840969daa3.scope: Deactivated successfully.
Nov 25 20:07:09 compute-0 sudo[92164]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:09 compute-0 sudo[92311]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:07:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:07:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:09 compute-0 sudo[92560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:09 compute-0 sudo[92598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqapljsrfvtjgqxsknpuwbkwnykickyn ; /usr/bin/python3'
Nov 25 20:07:09 compute-0 sudo[92560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:09 compute-0 sudo[92598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:09 compute-0 sudo[92560]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:09 compute-0 sudo[92606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:09 compute-0 sudo[92606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:09 compute-0 sudo[92606]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:09 compute-0 sudo[92631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:09 compute-0 sudo[92631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:09 compute-0 sudo[92631]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:09 compute-0 python3[92605]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:09 compute-0 sudo[92656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- inventory --format=json-pretty --filter-for-batch
Nov 25 20:07:09 compute-0 sudo[92656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:09 compute-0 podman[92666]: 2025-11-25 20:07:09.683513765 +0000 UTC m=+0.051308921 container create e87f61731cfd3531a69b4623e78823fd9fe86b46d09231037cc0b3058ead50cb (image=quay.io/ceph/ceph:v18, name=silly_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:07:09 compute-0 systemd[1]: Started libpod-conmon-e87f61731cfd3531a69b4623e78823fd9fe86b46d09231037cc0b3058ead50cb.scope.
Nov 25 20:07:09 compute-0 podman[92666]: 2025-11-25 20:07:09.656573717 +0000 UTC m=+0.024368883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2a0a589bad2302a1afd49d7d44b0e673f15d4552254dadcfc4d6c83369a72/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2a0a589bad2302a1afd49d7d44b0e673f15d4552254dadcfc4d6c83369a72/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:09 compute-0 podman[92666]: 2025-11-25 20:07:09.788191693 +0000 UTC m=+0.155986849 container init e87f61731cfd3531a69b4623e78823fd9fe86b46d09231037cc0b3058ead50cb (image=quay.io/ceph/ceph:v18, name=silly_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:09 compute-0 podman[92666]: 2025-11-25 20:07:09.821938633 +0000 UTC m=+0.189733819 container start e87f61731cfd3531a69b4623e78823fd9fe86b46d09231037cc0b3058ead50cb (image=quay.io/ceph/ceph:v18, name=silly_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:07:09 compute-0 podman[92666]: 2025-11-25 20:07:09.82961704 +0000 UTC m=+0.197412176 container attach e87f61731cfd3531a69b4623e78823fd9fe86b46d09231037cc0b3058ead50cb (image=quay.io/ceph/ceph:v18, name=silly_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:10 compute-0 podman[92741]: 2025-11-25 20:07:10.021429558 +0000 UTC m=+0.052307870 container create ec3bdb69435e40271c9021f1c7320dd482a8a2ed80d372221cf40cd86d1d002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:07:10 compute-0 systemd[1]: Started libpod-conmon-ec3bdb69435e40271c9021f1c7320dd482a8a2ed80d372221cf40cd86d1d002d.scope.
Nov 25 20:07:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:10 compute-0 podman[92741]: 2025-11-25 20:07:10.001710365 +0000 UTC m=+0.032588707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:10 compute-0 podman[92741]: 2025-11-25 20:07:10.103062155 +0000 UTC m=+0.133940487 container init ec3bdb69435e40271c9021f1c7320dd482a8a2ed80d372221cf40cd86d1d002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:10 compute-0 podman[92741]: 2025-11-25 20:07:10.11067037 +0000 UTC m=+0.141548682 container start ec3bdb69435e40271c9021f1c7320dd482a8a2ed80d372221cf40cd86d1d002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:10 compute-0 zen_noether[92755]: 167 167
Nov 25 20:07:10 compute-0 systemd[1]: libpod-ec3bdb69435e40271c9021f1c7320dd482a8a2ed80d372221cf40cd86d1d002d.scope: Deactivated successfully.
Nov 25 20:07:10 compute-0 podman[92741]: 2025-11-25 20:07:10.144139931 +0000 UTC m=+0.175018283 container attach ec3bdb69435e40271c9021f1c7320dd482a8a2ed80d372221cf40cd86d1d002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 20:07:10 compute-0 podman[92741]: 2025-11-25 20:07:10.145074499 +0000 UTC m=+0.175952821 container died ec3bdb69435e40271c9021f1c7320dd482a8a2ed80d372221cf40cd86d1d002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:07:10 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1291074225; not ready for session (expect reconnect)
Nov 25 20:07:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:10 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:10 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-82c84d6bc2ed8f007d375a83346f66eeb9fdbe4f08fb408814938e4637f154a9-merged.mount: Deactivated successfully.
Nov 25 20:07:10 compute-0 podman[92741]: 2025-11-25 20:07:10.209403783 +0000 UTC m=+0.240282085 container remove ec3bdb69435e40271c9021f1c7320dd482a8a2ed80d372221cf40cd86d1d002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:07:10 compute-0 systemd[1]: libpod-conmon-ec3bdb69435e40271c9021f1c7320dd482a8a2ed80d372221cf40cd86d1d002d.scope: Deactivated successfully.
Nov 25 20:07:10 compute-0 ceph-mon[75144]: purged_snaps scrub starts
Nov 25 20:07:10 compute-0 ceph-mon[75144]: purged_snaps scrub ok
Nov 25 20:07:10 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2553797956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:10 compute-0 ceph-mon[75144]: osdmap e17: 3 total, 2 up, 3 in
Nov 25 20:07:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:10 compute-0 podman[92798]: 2025-11-25 20:07:10.343875944 +0000 UTC m=+0.035260884 container create f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:10 compute-0 systemd[1]: Started libpod-conmon-f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065.scope.
Nov 25 20:07:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 20:07:10 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1176851946' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a53b1b8f5835fcc315508729347b06f48a924d67626d48628cc253e8fd669e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a53b1b8f5835fcc315508729347b06f48a924d67626d48628cc253e8fd669e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a53b1b8f5835fcc315508729347b06f48a924d67626d48628cc253e8fd669e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a53b1b8f5835fcc315508729347b06f48a924d67626d48628cc253e8fd669e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:10 compute-0 podman[92798]: 2025-11-25 20:07:10.422885203 +0000 UTC m=+0.114270163 container init f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:10 compute-0 podman[92798]: 2025-11-25 20:07:10.329909821 +0000 UTC m=+0.021294791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:10 compute-0 podman[92798]: 2025-11-25 20:07:10.4329094 +0000 UTC m=+0.124294340 container start f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 20:07:10 compute-0 podman[92798]: 2025-11-25 20:07:10.435919289 +0000 UTC m=+0.127304229 container attach f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sinoussi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:07:10 compute-0 ceph-osd[91367]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.147 iops: 8485.733 elapsed_sec: 0.354
Nov 25 20:07:10 compute-0 ceph-osd[91367]: log_channel(cluster) log [WRN] : OSD bench result of 8485.732708 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
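[annotation] The WRN above is the stock mclock capacity message: the synthetic osd bench result (~8486 IOPS) fell outside the 50-500 IOPS plausibility window for the detected device class, so osd.2 keeps the default 315 IOPS capacity. On a real deployment (rather than these CI OSDs, which sit on loop devices per the _collect_metadata line below) the log's own recommendation would be applied roughly as sketched here; the IOPS value and the ssd suffix are illustrative placeholders, not measured figures:

    # Hypothetical follow-up to the warning: benchmark with fio first, then
    # pin the mclock IOPS capacity for osd.2 (value/suffix are examples only).
    ceph config set osd.2 osd_mclock_max_capacity_iops_ssd 8500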
Nov 25 20:07:10 compute-0 ceph-osd[91367]: osd.2 0 waiting for initial osdmap
Nov 25 20:07:10 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2[91363]: 2025-11-25T20:07:10.664+0000 7f305ae38640 -1 osd.2 0 waiting for initial osdmap
Nov 25 20:07:10 compute-0 ceph-osd[91367]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 20:07:10 compute-0 ceph-osd[91367]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 20:07:10 compute-0 ceph-osd[91367]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 20:07:10 compute-0 ceph-osd[91367]: osd.2 17 check_osdmap_features require_osd_release unknown -> reef
Nov 25 20:07:10 compute-0 ceph-osd[91367]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 20:07:10 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-osd-2[91363]: 2025-11-25T20:07:10.705+0000 7f3056460640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 20:07:10 compute-0 ceph-osd[91367]: osd.2 17 set_numa_affinity not setting numa affinity
Nov 25 20:07:10 compute-0 ceph-osd[91367]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 25 20:07:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v45: 5 pgs: 2 unknown, 2 creating+peering, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 25 20:07:11 compute-0 ceph-mon[75144]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 20:07:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:11 compute-0 ceph-mgr[75443]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1291074225; not ready for session (expect reconnect)
Nov 25 20:07:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:11 compute-0 ceph-mgr[75443]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 20:07:11 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 25 20:07:11 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1176851946' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:11 compute-0 ceph-mon[75144]: pgmap v45: 5 pgs: 2 unknown, 2 creating+peering, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 25 20:07:11 compute-0 ceph-mon[75144]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
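[annotation] POOL_APP_NOT_ENABLED fires because the freshly created pools (images, cephfs.cephfs.meta, ...) carry no application tag yet. A minimal sketch of how the warning is normally cleared, assuming the images pool is destined for RBD as its name suggests:

    # Tag the pool with its intended application (rbd is assumed from the name).
    ceph osd pool application enable images rbd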
Nov 25 20:07:11 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:11 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1176851946' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 25 20:07:11 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225] boot
Nov 25 20:07:11 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 25 20:07:11 compute-0 silly_mirzakhani[92695]: pool 'cephfs.cephfs.meta' created
Nov 25 20:07:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 20:07:11 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:11 compute-0 ceph-osd[91367]: osd.2 18 state: booting -> active
Nov 25 20:07:11 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 18 pg[5.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[17,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:11 compute-0 systemd[1]: libpod-e87f61731cfd3531a69b4623e78823fd9fe86b46d09231037cc0b3058ead50cb.scope: Deactivated successfully.
Nov 25 20:07:11 compute-0 podman[92666]: 2025-11-25 20:07:11.28035591 +0000 UTC m=+1.648151086 container died e87f61731cfd3531a69b4623e78823fd9fe86b46d09231037cc0b3058ead50cb (image=quay.io/ceph/ceph:v18, name=silly_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 20:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4da2a0a589bad2302a1afd49d7d44b0e673f15d4552254dadcfc4d6c83369a72-merged.mount: Deactivated successfully.
Nov 25 20:07:11 compute-0 podman[92666]: 2025-11-25 20:07:11.34321838 +0000 UTC m=+1.711013576 container remove e87f61731cfd3531a69b4623e78823fd9fe86b46d09231037cc0b3058ead50cb (image=quay.io/ceph/ceph:v18, name=silly_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:07:11 compute-0 systemd[1]: libpod-conmon-e87f61731cfd3531a69b4623e78823fd9fe86b46d09231037cc0b3058ead50cb.scope: Deactivated successfully.
Nov 25 20:07:11 compute-0 sudo[92598]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:11 compute-0 sudo[92903]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiydwvgsvarrgofmkbbwlsucsymsuibc ; /usr/bin/python3'
Nov 25 20:07:11 compute-0 sudo[92903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:11 compute-0 python3[92995]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
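[annotation] This Ansible task repeats the same containerized-client pattern as the cephfs.cephfs.meta call earlier: podman runs the ceph:v18 image with the host's /etc/ceph mounted and ceph as the entrypoint, so no ceph CLI is needed on the host. Once both cephfs.* pools exist, the usual next step (not visible in this excerpt, so treat it as an assumption) would be to bind them into a filesystem:

    # Sketch only: join the two pools created above into a filesystem.
    # The fs name "cephfs" is inferred from the pool naming convention.
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data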
Nov 25 20:07:11 compute-0 podman[93567]: 2025-11-25 20:07:11.785135544 +0000 UTC m=+0.048780266 container create 3b69d8a2f00e22dc62d73e27e77f6029216f9cc466df92e10adff70007bee60e (image=quay.io/ceph/ceph:v18, name=adoring_galois, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:11 compute-0 systemd[1]: Started libpod-conmon-3b69d8a2f00e22dc62d73e27e77f6029216f9cc466df92e10adff70007bee60e.scope.
Nov 25 20:07:11 compute-0 podman[93567]: 2025-11-25 20:07:11.762451422 +0000 UTC m=+0.026096214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d463e8fc5e825a78cf92189b8666931e0c39ee8574123f9f7f3ad23d1ef4057/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d463e8fc5e825a78cf92189b8666931e0c39ee8574123f9f7f3ad23d1ef4057/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:11 compute-0 podman[93567]: 2025-11-25 20:07:11.880086695 +0000 UTC m=+0.143731447 container init 3b69d8a2f00e22dc62d73e27e77f6029216f9cc466df92e10adff70007bee60e (image=quay.io/ceph/ceph:v18, name=adoring_galois, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:11 compute-0 podman[93567]: 2025-11-25 20:07:11.890131802 +0000 UTC m=+0.153776504 container start 3b69d8a2f00e22dc62d73e27e77f6029216f9cc466df92e10adff70007bee60e (image=quay.io/ceph/ceph:v18, name=adoring_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:07:11 compute-0 podman[93567]: 2025-11-25 20:07:11.893852653 +0000 UTC m=+0.157497365 container attach 3b69d8a2f00e22dc62d73e27e77f6029216f9cc466df92e10adff70007bee60e (image=quay.io/ceph/ceph:v18, name=adoring_galois, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:07:11 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 18 pg[6.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:11 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 18 pg[2.0( v 12'32 (0'0,12'32] local-lis/les=11/12 n=2 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=18 pruub=8.042924881s) [2] r=-1 lpr=18 pi=[11,18)/1 crt=12'32 lcod 12'31 mlcod 0'0 unknown NOTIFY pruub 24.029441833s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:07:11 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 18 pg[2.0( v 12'32 (0'0,12'32] local-lis/les=11/12 n=2 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=18 pruub=8.042855263s) [2] r=-1 lpr=18 pi=[11,18)/1 crt=12'32 lcod 12'31 mlcod 0'0 unknown NOTIFY pruub 24.029441833s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:07:12 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=18) [2] r=0 lpr=18 pi=[11,18)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]: [
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:     {
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         "available": false,
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         "ceph_device": false,
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         "lsm_data": {},
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         "lvs": [],
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         "path": "/dev/sr0",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         "rejected_reasons": [
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "Has a FileSystem",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "Insufficient space (<5GB)"
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         ],
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         "sys_api": {
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "actuators": null,
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "device_nodes": "sr0",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "devname": "sr0",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "human_readable_size": "482.00 KB",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "id_bus": "ata",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "model": "QEMU DVD-ROM",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "nr_requests": "2",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "parent": "/dev/sr0",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "partitions": {},
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "path": "/dev/sr0",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "removable": "1",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "rev": "2.5+",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "ro": "0",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "rotational": "1",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "sas_address": "",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "sas_device_handle": "",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "scheduler_mode": "mq-deadline",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "sectors": 0,
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "sectorsize": "2048",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "size": 493568.0,
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "support_discard": "2048",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "type": "disk",
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:             "vendor": "QEMU"
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:         }
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]:     }
Nov 25 20:07:12 compute-0 dazzling_sinoussi[92816]: ]
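[annotation] The JSON above is the result of the ceph-volume inventory call launched at 20:07:09: the only raw device found is the QEMU DVD-ROM, rejected for having a filesystem and for insufficient space, which is why this job feeds pre-built logical volumes to lvm batch below instead of whole disks. The same report can be regenerated ad hoc; the fsid is the one used throughout this log:

    # Re-run the inventory this container produced (same flags as in the log).
    cephadm ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- \
        inventory --format=json-pretty --filter-for-batch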
Nov 25 20:07:12 compute-0 systemd[1]: libpod-f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065.scope: Deactivated successfully.
Nov 25 20:07:12 compute-0 systemd[1]: libpod-f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065.scope: Consumed 1.720s CPU time.
Nov 25 20:07:12 compute-0 podman[94849]: 2025-11-25 20:07:12.159878458 +0000 UTC m=+0.047929080 container died f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-76a53b1b8f5835fcc315508729347b06f48a924d67626d48628cc253e8fd669e-merged.mount: Deactivated successfully.
Nov 25 20:07:12 compute-0 podman[94849]: 2025-11-25 20:07:12.220783111 +0000 UTC m=+0.108833663 container remove f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sinoussi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:07:12 compute-0 systemd[1]: libpod-conmon-f85163e5964ed01aaa3d8883483f06d1e1d4966025999fc48c5cb93d1ddd5065.scope: Deactivated successfully.
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 25 20:07:12 compute-0 ceph-mon[75144]: OSD bench result of 8485.732708 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 20:07:12 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1176851946' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:12 compute-0 ceph-mon[75144]: osd.2 [v2:192.168.122.100:6810/1291074225,v1:192.168.122.100:6811/1291074225] boot
Nov 25 20:07:12 compute-0 ceph-mon[75144]: osdmap e18: 3 total, 3 up, 3 in
Nov 25 20:07:12 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 25 20:07:12 compute-0 sudo[92656]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:12 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 19 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:07:12 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 19 pg[2.0( v 12'32 lc 12'30 (0'0,12'32] local-lis/les=18/19 n=2 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=18) [2] r=0 lpr=18 pi=[11,18)/1 crt=12'32 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:12 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[17,18)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 20:07:12 compute-0 ceph-mgr[75443]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:07:12 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 25 20:07:12 compute-0 ceph-mgr[75443]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:07:12 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
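[annotation] These two mgr lines show cephadm's memory autotuner misfiring on an undersized node: it computed 43691k (~44.7 MB) per OSD, but osd_memory_target has a hard floor of 939524096 bytes (896 MiB), so the set is refused and the warning will recur on every tune cycle. On hosts this small the tuner is typically just disabled; a minimal sketch, assuming cluster-wide scope is acceptable:

    # Stop cephadm from recomputing osd_memory_target on undersized hosts.
    ceph config set osd osd_memory_target_autotune false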
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:12 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 9a11ec90-5316-4a37-8364-6ae0db67596b does not exist
Nov 25 20:07:12 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 47c4adc7-cd2d-4999-8cf8-0ba8f7697a33 does not exist
Nov 25 20:07:12 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev a30b60f4-8e55-49a2-96dd-abb6afcf8089 does not exist
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:07:12 compute-0 sudo[94883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:12 compute-0 sudo[94883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:12 compute-0 sudo[94883]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:12 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 20:07:12 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2746448818' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:12 compute-0 sudo[94908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:12 compute-0 sudo[94908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:12 compute-0 sudo[94908]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:12 compute-0 sudo[94936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:12 compute-0 sudo[94936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:12 compute-0 sudo[94936]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:12 compute-0 sudo[94961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:07:12 compute-0 sudo[94961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
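[annotation] Here cephadm starts the actual OSD provisioning: ceph-volume lvm batch is handed three pre-created logical volumes (ceph_vg0-2/ceph_lv0-2), with --no-auto to skip automatic device sorting, --yes to suppress prompts, and --no-systemd because cephadm manages the units itself; CEPH_VOLUME_OSDSPEC_AFFINITY ties the resulting OSDs to the default_drive_group spec. Stripped of the cephadm wrapper, the same call is roughly:

    # Equivalent direct invocation inside the ceph container (sketch; flags
    # taken verbatim from the sudo command line above).
    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --yes --no-systemd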
Nov 25 20:07:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v48: 6 pgs: 1 creating+peering, 1 peering, 1 unknown, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:13 compute-0 podman[95023]: 2025-11-25 20:07:13.111680496 +0000 UTC m=+0.078182666 container create 208e89c753ac11f8a94212b95b58ec761b1834aad881de14c3baa9d09980ee73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 20:07:13 compute-0 systemd[1]: Started libpod-conmon-208e89c753ac11f8a94212b95b58ec761b1834aad881de14c3baa9d09980ee73.scope.
Nov 25 20:07:13 compute-0 podman[95023]: 2025-11-25 20:07:13.080843323 +0000 UTC m=+0.047345553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:13 compute-0 podman[95023]: 2025-11-25 20:07:13.207782792 +0000 UTC m=+0.174284952 container init 208e89c753ac11f8a94212b95b58ec761b1834aad881de14c3baa9d09980ee73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:13 compute-0 podman[95023]: 2025-11-25 20:07:13.217974293 +0000 UTC m=+0.184476453 container start 208e89c753ac11f8a94212b95b58ec761b1834aad881de14c3baa9d09980ee73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:07:13 compute-0 podman[95023]: 2025-11-25 20:07:13.222101506 +0000 UTC m=+0.188603636 container attach 208e89c753ac11f8a94212b95b58ec761b1834aad881de14c3baa9d09980ee73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:13 compute-0 mystifying_mayer[95040]: 167 167
Nov 25 20:07:13 compute-0 systemd[1]: libpod-208e89c753ac11f8a94212b95b58ec761b1834aad881de14c3baa9d09980ee73.scope: Deactivated successfully.
Nov 25 20:07:13 compute-0 podman[95023]: 2025-11-25 20:07:13.225252909 +0000 UTC m=+0.191755039 container died 208e89c753ac11f8a94212b95b58ec761b1834aad881de14c3baa9d09980ee73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bbb53ebaa601ce0c25926a1093f10cc59d469185779ecc4fdeebfad7da20c8e-merged.mount: Deactivated successfully.
Nov 25 20:07:13 compute-0 podman[95023]: 2025-11-25 20:07:13.262345777 +0000 UTC m=+0.228847907 container remove 208e89c753ac11f8a94212b95b58ec761b1834aad881de14c3baa9d09980ee73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:07:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 25 20:07:13 compute-0 ceph-mon[75144]: osdmap e19: 3 total, 3 up, 3 in
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 20:07:13 compute-0 ceph-mon[75144]: Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:07:13 compute-0 ceph-mon[75144]: Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
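[Editor's note: the two ceph-mon lines above record the mgr's osd_memory_target autotuner computing 44740198 bytes (~43691 KiB) for this host, and the mon rejecting it because the option has a hard floor of 939524096 bytes (896 MiB). On a VM this small the per-OSD share falls far below that minimum, so the tuned value is discarded. A minimal Python sketch of that bounds check, illustrative only; the minimum is quoted from the log line itself, the function name is an assumption, not Ceph source.]

```python
# Illustrative only: reproduces the bounds check seen in the log above.
# The minimum (939524096 bytes = 896 MiB) is quoted from the error message.
OSD_MEMORY_TARGET_MIN = 939524096

def validate_osd_memory_target(value: int) -> int:
    """Hypothetical helper mimicking the mon's rejection of the autotuned value."""
    if value < OSD_MEMORY_TARGET_MIN:
        raise ValueError(f"Value '{value}' is below minimum {OSD_MEMORY_TARGET_MIN}")
    return value

autotuned = 44740198  # ~43691 KiB, the value the mgr computed for compute-0
validate_osd_memory_target(autotuned)  # raises, matching the 'Unable to set' line
```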
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:07:13 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2746448818' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 20:07:13 compute-0 ceph-mon[75144]: pgmap v48: 6 pgs: 1 creating+peering, 1 peering, 1 unknown, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:13 compute-0 systemd[1]: libpod-conmon-208e89c753ac11f8a94212b95b58ec761b1834aad881de14c3baa9d09980ee73.scope: Deactivated successfully.
Nov 25 20:07:13 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2746448818' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 25 20:07:13 compute-0 adoring_galois[93924]: pool 'cephfs.cephfs.data' created
Nov 25 20:07:13 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 25 20:07:13 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 20 pg[7.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:13 compute-0 systemd[1]: libpod-3b69d8a2f00e22dc62d73e27e77f6029216f9cc466df92e10adff70007bee60e.scope: Deactivated successfully.
Nov 25 20:07:13 compute-0 podman[93567]: 2025-11-25 20:07:13.333098811 +0000 UTC m=+1.596743543 container died 3b69d8a2f00e22dc62d73e27e77f6029216f9cc466df92e10adff70007bee60e (image=quay.io/ceph/ceph:v18, name=adoring_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 20:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d463e8fc5e825a78cf92189b8666931e0c39ee8574123f9f7f3ad23d1ef4057-merged.mount: Deactivated successfully.
Nov 25 20:07:13 compute-0 podman[93567]: 2025-11-25 20:07:13.384016089 +0000 UTC m=+1.647660791 container remove 3b69d8a2f00e22dc62d73e27e77f6029216f9cc466df92e10adff70007bee60e (image=quay.io/ceph/ceph:v18, name=adoring_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:07:13 compute-0 systemd[1]: libpod-conmon-3b69d8a2f00e22dc62d73e27e77f6029216f9cc466df92e10adff70007bee60e.scope: Deactivated successfully.
Nov 25 20:07:13 compute-0 sudo[92903]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:13 compute-0 podman[95074]: 2025-11-25 20:07:13.45666966 +0000 UTC m=+0.060192053 container create 356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ganguly, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:13 compute-0 systemd[1]: Started libpod-conmon-356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340.scope.
Nov 25 20:07:13 compute-0 podman[95074]: 2025-11-25 20:07:13.428355221 +0000 UTC m=+0.031877654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be7f789dbfc31b7c02f33889e910e8c4ce308e5221ba50e2cc6c5c83c30f35d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be7f789dbfc31b7c02f33889e910e8c4ce308e5221ba50e2cc6c5c83c30f35d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be7f789dbfc31b7c02f33889e910e8c4ce308e5221ba50e2cc6c5c83c30f35d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be7f789dbfc31b7c02f33889e910e8c4ce308e5221ba50e2cc6c5c83c30f35d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be7f789dbfc31b7c02f33889e910e8c4ce308e5221ba50e2cc6c5c83c30f35d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:13 compute-0 podman[95074]: 2025-11-25 20:07:13.559417142 +0000 UTC m=+0.162939535 container init 356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:13 compute-0 podman[95074]: 2025-11-25 20:07:13.571304874 +0000 UTC m=+0.174827257 container start 356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 20:07:13 compute-0 podman[95074]: 2025-11-25 20:07:13.574736566 +0000 UTC m=+0.178258959 container attach 356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ganguly, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 20:07:13 compute-0 sudo[95118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmwwzmcbivnxkvirgvhefmaguxvjkvo ; /usr/bin/python3'
Nov 25 20:07:13 compute-0 sudo[95118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:13 compute-0 python3[95120]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
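[Editor's note: the ansible-ansible.legacy.command line above records, verbatim, the podman invocation used to enable the rbd application on the vms pool. The same call expressed as a runnable Python subprocess sketch, for readability; every flag, path, image tag, and fsid below is copied from the log line, nothing is invented.]

```python
# Reconstruction of the command Ansible ran above, as a subprocess call.
import subprocess

cmd = [
    "podman", "run", "--rm", "--net=host", "--ipc=host",
    "--volume", "/etc/ceph:/etc/ceph:z",
    "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
    "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
    "--fsid", "712dd110-763a-5547-8ef7-acda1414fdce",
    "-c", "/etc/ceph/ceph.conf",
    "-k", "/etc/ceph/ceph.client.admin.keyring",
    "osd", "pool", "application", "enable", "vms", "rbd",
]
subprocess.run(cmd, check=True)
```

[On success the container prints `enabled application 'rbd' on pool 'vms'`, which is exactly what the sad_chaum container emits further down in this log.]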
Nov 25 20:07:13 compute-0 podman[95121]: 2025-11-25 20:07:13.812633018 +0000 UTC m=+0.060343937 container create 177816367dc2055511121195803a449c880e3fffcbc807354ded9dd57cfbf9dd (image=quay.io/ceph/ceph:v18, name=sad_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:13 compute-0 systemd[1]: Started libpod-conmon-177816367dc2055511121195803a449c880e3fffcbc807354ded9dd57cfbf9dd.scope.
Nov 25 20:07:13 compute-0 podman[95121]: 2025-11-25 20:07:13.779964782 +0000 UTC m=+0.027675751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526a991f07e2e316f5c52a24016f7414861c434a650eae558e7bd03922e082ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526a991f07e2e316f5c52a24016f7414861c434a650eae558e7bd03922e082ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:13 compute-0 podman[95121]: 2025-11-25 20:07:13.915210265 +0000 UTC m=+0.162921214 container init 177816367dc2055511121195803a449c880e3fffcbc807354ded9dd57cfbf9dd (image=quay.io/ceph/ceph:v18, name=sad_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:07:13 compute-0 podman[95121]: 2025-11-25 20:07:13.924935454 +0000 UTC m=+0.172646363 container start 177816367dc2055511121195803a449c880e3fffcbc807354ded9dd57cfbf9dd (image=quay.io/ceph/ceph:v18, name=sad_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:07:13 compute-0 podman[95121]: 2025-11-25 20:07:13.929589641 +0000 UTC m=+0.177300630 container attach 177816367dc2055511121195803a449c880e3fffcbc807354ded9dd57cfbf9dd (image=quay.io/ceph/ceph:v18, name=sad_chaum, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:07:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 25 20:07:14 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2746448818' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 20:07:14 compute-0 ceph-mon[75144]: osdmap e20: 3 total, 3 up, 3 in
Nov 25 20:07:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 25 20:07:14 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 25 20:07:14 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 21 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:07:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 25 20:07:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1409700090' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 20:07:14 compute-0 eloquent_ganguly[95090]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:07:14 compute-0 eloquent_ganguly[95090]: --> relative data size: 1.0
Nov 25 20:07:14 compute-0 eloquent_ganguly[95090]: --> All data devices are unavailable
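[Editor's note: the three eloquent_ganguly lines above are ceph-volume's drive-group batch report: it was handed 0 physical and 3 LVM data devices, all of which already carry OSDs, so nothing new is prepared. A hypothetical sketch of that summary logic, with device availability hard-coded to match this host; it is not ceph-volume source code.]

```python
# Illustrative only: mimics the three-line batch summary printed above.
devices = [
    {"path": "/dev/ceph_vg0/ceph_lv0", "lvm": True, "available": False},
    {"path": "/dev/ceph_vg1/ceph_lv1", "lvm": True, "available": False},
    {"path": "/dev/ceph_vg2/ceph_lv2", "lvm": True, "available": False},
]
physical = sum(not d["lvm"] for d in devices)
lvm = sum(d["lvm"] for d in devices)
print(f"--> passed data devices: {physical} physical, {lvm} LVM")
print("--> relative data size: 1.0")
if not any(d["available"] for d in devices):
    print("--> All data devices are unavailable")
```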
Nov 25 20:07:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v51: 7 pgs: 1 creating+peering, 1 peering, 2 unknown, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:14 compute-0 systemd[1]: libpod-356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340.scope: Deactivated successfully.
Nov 25 20:07:14 compute-0 podman[95074]: 2025-11-25 20:07:14.722545176 +0000 UTC m=+1.326067579 container died 356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 20:07:14 compute-0 systemd[1]: libpod-356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340.scope: Consumed 1.106s CPU time.
Nov 25 20:07:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5be7f789dbfc31b7c02f33889e910e8c4ce308e5221ba50e2cc6c5c83c30f35d-merged.mount: Deactivated successfully.
Nov 25 20:07:14 compute-0 podman[95074]: 2025-11-25 20:07:14.788431838 +0000 UTC m=+1.391954271 container remove 356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ganguly, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:07:14 compute-0 systemd[1]: libpod-conmon-356496e211307247b6db215b188eda73e951e3b11230549825b41dd2d00d1340.scope: Deactivated successfully.
Nov 25 20:07:14 compute-0 sudo[94961]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:14 compute-0 sudo[95197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:14 compute-0 sudo[95197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:14 compute-0 sudo[95197]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:14 compute-0 sudo[95222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:14 compute-0 sudo[95222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:14 compute-0 sudo[95222]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:15 compute-0 sudo[95247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:15 compute-0 sudo[95247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:15 compute-0 sudo[95247]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:15 compute-0 sudo[95272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:07:15 compute-0 sudo[95272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 25 20:07:15 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1409700090' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 20:07:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 25 20:07:15 compute-0 sad_chaum[95137]: enabled application 'rbd' on pool 'vms'
Nov 25 20:07:15 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 25 20:07:15 compute-0 ceph-mon[75144]: osdmap e21: 3 total, 3 up, 3 in
Nov 25 20:07:15 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1409700090' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 20:07:15 compute-0 ceph-mon[75144]: pgmap v51: 7 pgs: 1 creating+peering, 1 peering, 2 unknown, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:15 compute-0 systemd[1]: libpod-177816367dc2055511121195803a449c880e3fffcbc807354ded9dd57cfbf9dd.scope: Deactivated successfully.
Nov 25 20:07:15 compute-0 podman[95121]: 2025-11-25 20:07:15.353342971 +0000 UTC m=+1.601053900 container died 177816367dc2055511121195803a449c880e3fffcbc807354ded9dd57cfbf9dd (image=quay.io/ceph/ceph:v18, name=sad_chaum, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 20:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-526a991f07e2e316f5c52a24016f7414861c434a650eae558e7bd03922e082ba-merged.mount: Deactivated successfully.
Nov 25 20:07:15 compute-0 podman[95121]: 2025-11-25 20:07:15.409505034 +0000 UTC m=+1.657215923 container remove 177816367dc2055511121195803a449c880e3fffcbc807354ded9dd57cfbf9dd (image=quay.io/ceph/ceph:v18, name=sad_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 20:07:15 compute-0 systemd[1]: libpod-conmon-177816367dc2055511121195803a449c880e3fffcbc807354ded9dd57cfbf9dd.scope: Deactivated successfully.
Nov 25 20:07:15 compute-0 sudo[95118]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:15 compute-0 podman[95348]: 2025-11-25 20:07:15.584683621 +0000 UTC m=+0.067261083 container create be5b2d680eba21c7d53246e5f00abf9ec722d2b632c82d1c02f11ca9b0965b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:15 compute-0 sudo[95385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddzcsckyketnvkviiigwqneikpvjprar ; /usr/bin/python3'
Nov 25 20:07:15 compute-0 sudo[95385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:15 compute-0 systemd[1]: Started libpod-conmon-be5b2d680eba21c7d53246e5f00abf9ec722d2b632c82d1c02f11ca9b0965b43.scope.
Nov 25 20:07:15 compute-0 podman[95348]: 2025-11-25 20:07:15.559845215 +0000 UTC m=+0.042422737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:15 compute-0 podman[95348]: 2025-11-25 20:07:15.679302462 +0000 UTC m=+0.161879964 container init be5b2d680eba21c7d53246e5f00abf9ec722d2b632c82d1c02f11ca9b0965b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:15 compute-0 podman[95348]: 2025-11-25 20:07:15.688384951 +0000 UTC m=+0.170962413 container start be5b2d680eba21c7d53246e5f00abf9ec722d2b632c82d1c02f11ca9b0965b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:15 compute-0 podman[95348]: 2025-11-25 20:07:15.69306555 +0000 UTC m=+0.175643002 container attach be5b2d680eba21c7d53246e5f00abf9ec722d2b632c82d1c02f11ca9b0965b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:07:15 compute-0 inspiring_leakey[95391]: 167 167
Nov 25 20:07:15 compute-0 podman[95348]: 2025-11-25 20:07:15.699751407 +0000 UTC m=+0.182328849 container died be5b2d680eba21c7d53246e5f00abf9ec722d2b632c82d1c02f11ca9b0965b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:07:15 compute-0 systemd[1]: libpod-be5b2d680eba21c7d53246e5f00abf9ec722d2b632c82d1c02f11ca9b0965b43.scope: Deactivated successfully.
Nov 25 20:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-efded618094ce79cb76430045e05b01ea6a6bd1e4b515d031f6348e0af5917b5-merged.mount: Deactivated successfully.
Nov 25 20:07:15 compute-0 podman[95348]: 2025-11-25 20:07:15.751910402 +0000 UTC m=+0.234487874 container remove be5b2d680eba21c7d53246e5f00abf9ec722d2b632c82d1c02f11ca9b0965b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 20:07:15 compute-0 python3[95390]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:15 compute-0 systemd[1]: libpod-conmon-be5b2d680eba21c7d53246e5f00abf9ec722d2b632c82d1c02f11ca9b0965b43.scope: Deactivated successfully.
Nov 25 20:07:15 compute-0 podman[95410]: 2025-11-25 20:07:15.860199787 +0000 UTC m=+0.059756240 container create 23a4dd559c1d8c51c3acd136236115ab8ff75937e5b6eb8f5381b9897ff531c0 (image=quay.io/ceph/ceph:v18, name=heuristic_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:15 compute-0 systemd[1]: Started libpod-conmon-23a4dd559c1d8c51c3acd136236115ab8ff75937e5b6eb8f5381b9897ff531c0.scope.
Nov 25 20:07:15 compute-0 podman[95410]: 2025-11-25 20:07:15.843300787 +0000 UTC m=+0.042857260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1883a76121d11e954eb171098ed35bede00f8d92c760ca262cf6f1d5261243f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1883a76121d11e954eb171098ed35bede00f8d92c760ca262cf6f1d5261243f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:15 compute-0 podman[95410]: 2025-11-25 20:07:15.960438225 +0000 UTC m=+0.159994698 container init 23a4dd559c1d8c51c3acd136236115ab8ff75937e5b6eb8f5381b9897ff531c0 (image=quay.io/ceph/ceph:v18, name=heuristic_tharp, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:07:15 compute-0 podman[95410]: 2025-11-25 20:07:15.967452663 +0000 UTC m=+0.167009116 container start 23a4dd559c1d8c51c3acd136236115ab8ff75937e5b6eb8f5381b9897ff531c0 (image=quay.io/ceph/ceph:v18, name=heuristic_tharp, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:15 compute-0 podman[95410]: 2025-11-25 20:07:15.970680659 +0000 UTC m=+0.170237112 container attach 23a4dd559c1d8c51c3acd136236115ab8ff75937e5b6eb8f5381b9897ff531c0 (image=quay.io/ceph/ceph:v18, name=heuristic_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:15 compute-0 podman[95431]: 2025-11-25 20:07:15.976652275 +0000 UTC m=+0.068222880 container create b9d38286b7a4cefc524ec4a71ecc57a599b40a6fa725e399a56eabe9b0e76169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 20:07:16 compute-0 systemd[1]: Started libpod-conmon-b9d38286b7a4cefc524ec4a71ecc57a599b40a6fa725e399a56eabe9b0e76169.scope.
Nov 25 20:07:16 compute-0 podman[95431]: 2025-11-25 20:07:15.942590287 +0000 UTC m=+0.034160952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4389d7d36b63de2facd1038cd399de4bf6383df800f9dec999036222ef07b29e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4389d7d36b63de2facd1038cd399de4bf6383df800f9dec999036222ef07b29e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4389d7d36b63de2facd1038cd399de4bf6383df800f9dec999036222ef07b29e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4389d7d36b63de2facd1038cd399de4bf6383df800f9dec999036222ef07b29e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:16 compute-0 podman[95431]: 2025-11-25 20:07:16.097941226 +0000 UTC m=+0.189511841 container init b9d38286b7a4cefc524ec4a71ecc57a599b40a6fa725e399a56eabe9b0e76169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 20:07:16 compute-0 podman[95431]: 2025-11-25 20:07:16.110994022 +0000 UTC m=+0.202564617 container start b9d38286b7a4cefc524ec4a71ecc57a599b40a6fa725e399a56eabe9b0e76169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:16 compute-0 podman[95431]: 2025-11-25 20:07:16.115007751 +0000 UTC m=+0.206578346 container attach b9d38286b7a4cefc524ec4a71ecc57a599b40a6fa725e399a56eabe9b0e76169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:07:16 compute-0 ceph-mon[75144]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 20:07:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:16 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1409700090' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 20:07:16 compute-0 ceph-mon[75144]: osdmap e22: 3 total, 3 up, 3 in
Nov 25 20:07:16 compute-0 ceph-mon[75144]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 20:07:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 25 20:07:16 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/50086544' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 20:07:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 1 creating+peering, 1 peering, 1 unknown, 4 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]: {
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:     "0": [
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:         {
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "devices": [
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "/dev/loop3"
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             ],
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_name": "ceph_lv0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_size": "21470642176",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "name": "ceph_lv0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "tags": {
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.cluster_name": "ceph",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.crush_device_class": "",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.encrypted": "0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.osd_id": "0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.type": "block",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.vdo": "0"
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             },
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "type": "block",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "vg_name": "ceph_vg0"
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:         }
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:     ],
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:     "1": [
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:         {
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "devices": [
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "/dev/loop4"
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             ],
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_name": "ceph_lv1",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_size": "21470642176",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "name": "ceph_lv1",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "tags": {
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.cluster_name": "ceph",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.crush_device_class": "",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.encrypted": "0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.osd_id": "1",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.type": "block",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.vdo": "0"
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             },
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "type": "block",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "vg_name": "ceph_vg1"
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:         }
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:     ],
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:     "2": [
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:         {
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "devices": [
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "/dev/loop5"
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             ],
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_name": "ceph_lv2",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_size": "21470642176",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "name": "ceph_lv2",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "tags": {
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.cluster_name": "ceph",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.crush_device_class": "",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.encrypted": "0",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.osd_id": "2",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.type": "block",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:                 "ceph.vdo": "0"
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             },
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "type": "block",
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:             "vg_name": "ceph_vg2"
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:         }
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]:     ]
Nov 25 20:07:16 compute-0 vigorous_rosalind[95452]: }
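[Editor's note: the JSON block above, emitted by the vigorous_rosalind container running `cephadm ... ceph-volume lvm list --format json`, maps each OSD id to its backing logical volume, loop device, and LV tags. A short sketch of consuming that output; the literal below is a trimmed single-OSD excerpt of the log output, and the field names are taken verbatim from it.]

```python
import json

# Trimmed single-OSD excerpt of the JSON printed above (osd.0 only).
output = """
{
  "0": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "tags": {"ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4"}
    }
  ]
}
"""

for osd_id, lvs in sorted(json.loads(output).items()):
    for lv in lvs:
        devices = ",".join(lv["devices"])
        print(f"osd.{osd_id}: {lv['lv_path']} on {devices} "
              f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
# -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid f0a2211a-2b5d-4914-9a66-9743102e8fa4)
```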
Nov 25 20:07:16 compute-0 systemd[1]: libpod-b9d38286b7a4cefc524ec4a71ecc57a599b40a6fa725e399a56eabe9b0e76169.scope: Deactivated successfully.
Nov 25 20:07:16 compute-0 podman[95431]: 2025-11-25 20:07:16.923514887 +0000 UTC m=+1.015085472 container died b9d38286b7a4cefc524ec4a71ecc57a599b40a6fa725e399a56eabe9b0e76169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4389d7d36b63de2facd1038cd399de4bf6383df800f9dec999036222ef07b29e-merged.mount: Deactivated successfully.
Nov 25 20:07:16 compute-0 podman[95431]: 2025-11-25 20:07:16.999168877 +0000 UTC m=+1.090739442 container remove b9d38286b7a4cefc524ec4a71ecc57a599b40a6fa725e399a56eabe9b0e76169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:17 compute-0 systemd[1]: libpod-conmon-b9d38286b7a4cefc524ec4a71ecc57a599b40a6fa725e399a56eabe9b0e76169.scope: Deactivated successfully.
Nov 25 20:07:17 compute-0 sudo[95272]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:17 compute-0 sudo[95495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:17 compute-0 sudo[95495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:17 compute-0 sudo[95495]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:17 compute-0 sudo[95520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:17 compute-0 sudo[95520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:17 compute-0 sudo[95520]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:17 compute-0 sudo[95545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:17 compute-0 sudo[95545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:17 compute-0 sudo[95545]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 25 20:07:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/50086544' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 20:07:17 compute-0 ceph-mon[75144]: pgmap v53: 7 pgs: 1 creating+peering, 1 peering, 1 unknown, 4 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/50086544' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 20:07:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 25 20:07:17 compute-0 heuristic_tharp[95433]: enabled application 'rbd' on pool 'volumes'
Nov 25 20:07:17 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 25 20:07:17 compute-0 systemd[1]: libpod-23a4dd559c1d8c51c3acd136236115ab8ff75937e5b6eb8f5381b9897ff531c0.scope: Deactivated successfully.
Nov 25 20:07:17 compute-0 podman[95410]: 2025-11-25 20:07:17.392345327 +0000 UTC m=+1.591901810 container died 23a4dd559c1d8c51c3acd136236115ab8ff75937e5b6eb8f5381b9897ff531c0 (image=quay.io/ceph/ceph:v18, name=heuristic_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 20:07:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1883a76121d11e954eb171098ed35bede00f8d92c760ca262cf6f1d5261243f-merged.mount: Deactivated successfully.
Nov 25 20:07:17 compute-0 sudo[95570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:07:17 compute-0 sudo[95570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:17 compute-0 podman[95410]: 2025-11-25 20:07:17.444436699 +0000 UTC m=+1.643993182 container remove 23a4dd559c1d8c51c3acd136236115ab8ff75937e5b6eb8f5381b9897ff531c0 (image=quay.io/ceph/ceph:v18, name=heuristic_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 20:07:17 compute-0 systemd[1]: libpod-conmon-23a4dd559c1d8c51c3acd136236115ab8ff75937e5b6eb8f5381b9897ff531c0.scope: Deactivated successfully.
Nov 25 20:07:17 compute-0 sudo[95385]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:17 compute-0 sudo[95642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxhfegkhoqugvfxuhyatjeqpppmhcztn ; /usr/bin/python3'
Nov 25 20:07:17 compute-0 sudo[95642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:17 compute-0 podman[95671]: 2025-11-25 20:07:17.885161527 +0000 UTC m=+0.066460819 container create 462ca5587ef38a37d43f98479fd9095b95589a479439703e057306f201d2c9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:17 compute-0 systemd[1]: Started libpod-conmon-462ca5587ef38a37d43f98479fd9095b95589a479439703e057306f201d2c9e0.scope.
Nov 25 20:07:17 compute-0 podman[95671]: 2025-11-25 20:07:17.858975392 +0000 UTC m=+0.040274744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:17 compute-0 podman[95671]: 2025-11-25 20:07:17.990510616 +0000 UTC m=+0.171809948 container init 462ca5587ef38a37d43f98479fd9095b95589a479439703e057306f201d2c9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:07:17 compute-0 podman[95671]: 2025-11-25 20:07:17.999624226 +0000 UTC m=+0.180923578 container start 462ca5587ef38a37d43f98479fd9095b95589a479439703e057306f201d2c9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:07:18 compute-0 kind_ellis[95687]: 167 167
Nov 25 20:07:18 compute-0 podman[95671]: 2025-11-25 20:07:18.003916603 +0000 UTC m=+0.185215865 container attach 462ca5587ef38a37d43f98479fd9095b95589a479439703e057306f201d2c9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:18 compute-0 systemd[1]: libpod-462ca5587ef38a37d43f98479fd9095b95589a479439703e057306f201d2c9e0.scope: Deactivated successfully.
Nov 25 20:07:18 compute-0 podman[95671]: 2025-11-25 20:07:18.005355795 +0000 UTC m=+0.186655097 container died 462ca5587ef38a37d43f98479fd9095b95589a479439703e057306f201d2c9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:07:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f5c7299c7ef6bc05ebd45b06f6ced179f9bfad9c657e104f07dd8f6d1a9400-merged.mount: Deactivated successfully.
Nov 25 20:07:18 compute-0 podman[95671]: 2025-11-25 20:07:18.058632513 +0000 UTC m=+0.239931775 container remove 462ca5587ef38a37d43f98479fd9095b95589a479439703e057306f201d2c9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:18 compute-0 systemd[1]: libpod-conmon-462ca5587ef38a37d43f98479fd9095b95589a479439703e057306f201d2c9e0.scope: Deactivated successfully.
Nov 25 20:07:18 compute-0 python3[95650]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:18 compute-0 podman[95705]: 2025-11-25 20:07:18.200225695 +0000 UTC m=+0.062614575 container create 015652e32c113595dab767871289a9fe13142363c5a6eba3662a6aa8a4bc278e (image=quay.io/ceph/ceph:v18, name=eager_einstein, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:18 compute-0 systemd[1]: Started libpod-conmon-015652e32c113595dab767871289a9fe13142363c5a6eba3662a6aa8a4bc278e.scope.
Nov 25 20:07:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33fdd1b9ef86cf36a47d2f5ea2aca9a274281096db58967ee617c5159733090d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33fdd1b9ef86cf36a47d2f5ea2aca9a274281096db58967ee617c5159733090d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:18 compute-0 podman[95705]: 2025-11-25 20:07:18.167840826 +0000 UTC m=+0.030229756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:18 compute-0 podman[95705]: 2025-11-25 20:07:18.260703735 +0000 UTC m=+0.123092585 container init 015652e32c113595dab767871289a9fe13142363c5a6eba3662a6aa8a4bc278e (image=quay.io/ceph/ceph:v18, name=eager_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:07:18 compute-0 podman[95705]: 2025-11-25 20:07:18.266884678 +0000 UTC m=+0.129273528 container start 015652e32c113595dab767871289a9fe13142363c5a6eba3662a6aa8a4bc278e (image=quay.io/ceph/ceph:v18, name=eager_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:18 compute-0 podman[95722]: 2025-11-25 20:07:18.267677592 +0000 UTC m=+0.057309318 container create 817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:07:18 compute-0 podman[95705]: 2025-11-25 20:07:18.271223577 +0000 UTC m=+0.133612427 container attach 015652e32c113595dab767871289a9fe13142363c5a6eba3662a6aa8a4bc278e (image=quay.io/ceph/ceph:v18, name=eager_einstein, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:18 compute-0 systemd[1]: Started libpod-conmon-817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236.scope.
Nov 25 20:07:18 compute-0 podman[95722]: 2025-11-25 20:07:18.239968262 +0000 UTC m=+0.029599998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032f13ae7fcaa21287e552e96861acfcbd9693489af380e5582c514b46621d63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032f13ae7fcaa21287e552e96861acfcbd9693489af380e5582c514b46621d63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032f13ae7fcaa21287e552e96861acfcbd9693489af380e5582c514b46621d63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032f13ae7fcaa21287e552e96861acfcbd9693489af380e5582c514b46621d63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:18 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/50086544' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 20:07:18 compute-0 ceph-mon[75144]: osdmap e23: 3 total, 3 up, 3 in
Nov 25 20:07:18 compute-0 podman[95722]: 2025-11-25 20:07:18.397951398 +0000 UTC m=+0.187583204 container init 817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:18 compute-0 podman[95722]: 2025-11-25 20:07:18.410455029 +0000 UTC m=+0.200086755 container start 817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:18 compute-0 podman[95722]: 2025-11-25 20:07:18.413703705 +0000 UTC m=+0.203335461 container attach 817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:07:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s, 0 objects/s recovering
Nov 25 20:07:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 25 20:07:18 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/755121945' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 20:07:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 25 20:07:19 compute-0 ceph-mon[75144]: pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s, 0 objects/s recovering
Nov 25 20:07:19 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/755121945' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 20:07:19 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/755121945' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 20:07:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 25 20:07:19 compute-0 eager_einstein[95737]: enabled application 'rbd' on pool 'backups'
Nov 25 20:07:19 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]: {
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "osd_id": 2,
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "type": "bluestore"
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:     },
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "osd_id": 1,
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "type": "bluestore"
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:     },
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "osd_id": 0,
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:         "type": "bluestore"
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]:     }
Nov 25 20:07:19 compute-0 ecstatic_easley[95745]: }
Nov 25 20:07:19 compute-0 systemd[1]: libpod-015652e32c113595dab767871289a9fe13142363c5a6eba3662a6aa8a4bc278e.scope: Deactivated successfully.
Nov 25 20:07:19 compute-0 podman[95705]: 2025-11-25 20:07:19.407243099 +0000 UTC m=+1.269631949 container died 015652e32c113595dab767871289a9fe13142363c5a6eba3662a6aa8a4bc278e (image=quay.io/ceph/ceph:v18, name=eager_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-33fdd1b9ef86cf36a47d2f5ea2aca9a274281096db58967ee617c5159733090d-merged.mount: Deactivated successfully.
Nov 25 20:07:19 compute-0 systemd[1]: libpod-817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236.scope: Deactivated successfully.
Nov 25 20:07:19 compute-0 podman[95722]: 2025-11-25 20:07:19.440307207 +0000 UTC m=+1.229938933 container died 817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:19 compute-0 systemd[1]: libpod-817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236.scope: Consumed 1.035s CPU time.
Nov 25 20:07:19 compute-0 podman[95705]: 2025-11-25 20:07:19.453150038 +0000 UTC m=+1.315538878 container remove 015652e32c113595dab767871289a9fe13142363c5a6eba3662a6aa8a4bc278e (image=quay.io/ceph/ceph:v18, name=eager_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 20:07:19 compute-0 systemd[1]: libpod-conmon-015652e32c113595dab767871289a9fe13142363c5a6eba3662a6aa8a4bc278e.scope: Deactivated successfully.
Nov 25 20:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-032f13ae7fcaa21287e552e96861acfcbd9693489af380e5582c514b46621d63-merged.mount: Deactivated successfully.
Nov 25 20:07:19 compute-0 sudo[95642]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:19 compute-0 podman[95722]: 2025-11-25 20:07:19.496194122 +0000 UTC m=+1.285825848 container remove 817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 20:07:19 compute-0 systemd[1]: libpod-conmon-817e9e98dbedbcc921e42751d00f4eed343454e063f4bb1351bc6babb34c6236.scope: Deactivated successfully.
Nov 25 20:07:19 compute-0 sudo[95570]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:07:19 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:07:19 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:19 compute-0 sudo[95822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:19 compute-0 sudo[95822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:19 compute-0 sudo[95868]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcxhbzdqisrvzldmealjkafhltnfnzgs ; /usr/bin/python3'
Nov 25 20:07:19 compute-0 sudo[95822]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:19 compute-0 sudo[95868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:19 compute-0 sudo[95873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:07:19 compute-0 sudo[95873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:19 compute-0 sudo[95873]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:19 compute-0 sudo[95898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:19 compute-0 sudo[95898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:19 compute-0 sudo[95898]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:19 compute-0 python3[95872]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:19 compute-0 sudo[95923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:19 compute-0 sudo[95923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:19 compute-0 sudo[95923]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:19 compute-0 podman[95935]: 2025-11-25 20:07:19.900720819 +0000 UTC m=+0.066118709 container create 48792d0602a802aa616dc7a24590a074d57f55a489a759c54bd4e3655a1a63bb (image=quay.io/ceph/ceph:v18, name=quizzical_proskuriakova, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:07:19 compute-0 systemd[1]: Started libpod-conmon-48792d0602a802aa616dc7a24590a074d57f55a489a759c54bd4e3655a1a63bb.scope.
Nov 25 20:07:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8152c8e72520d53c109bcd06ba8cc09947f1e063a441fbf4cdb7b6b654a5b7b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8152c8e72520d53c109bcd06ba8cc09947f1e063a441fbf4cdb7b6b654a5b7b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:19 compute-0 sudo[95961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:19 compute-0 podman[95935]: 2025-11-25 20:07:19.883297122 +0000 UTC m=+0.048695022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:19 compute-0 sudo[95961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:19 compute-0 sudo[95961]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:19 compute-0 podman[95935]: 2025-11-25 20:07:19.987634222 +0000 UTC m=+0.153032372 container init 48792d0602a802aa616dc7a24590a074d57f55a489a759c54bd4e3655a1a63bb (image=quay.io/ceph/ceph:v18, name=quizzical_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:19 compute-0 podman[95935]: 2025-11-25 20:07:19.9973976 +0000 UTC m=+0.162795520 container start 48792d0602a802aa616dc7a24590a074d57f55a489a759c54bd4e3655a1a63bb (image=quay.io/ceph/ceph:v18, name=quizzical_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:20 compute-0 podman[95935]: 2025-11-25 20:07:20.001703208 +0000 UTC m=+0.167101118 container attach 48792d0602a802aa616dc7a24590a074d57f55a489a759c54bd4e3655a1a63bb (image=quay.io/ceph/ceph:v18, name=quizzical_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:07:20 compute-0 sudo[95992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:07:20 compute-0 sudo[95992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:20 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/755121945' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 20:07:20 compute-0 ceph-mon[75144]: osdmap e24: 3 total, 3 up, 3 in
Nov 25 20:07:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 25 20:07:20 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2487993002' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 20:07:20 compute-0 podman[96110]: 2025-11-25 20:07:20.62968324 +0000 UTC m=+0.062156121 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s, 0 objects/s recovering
Nov 25 20:07:20 compute-0 podman[96110]: 2025-11-25 20:07:20.745188939 +0000 UTC m=+0.177661830 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:07:21 compute-0 ceph-mon[75144]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 20:07:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:21 compute-0 sudo[95992]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:07:21 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:07:21 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 25 20:07:21 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2487993002' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 20:07:21 compute-0 ceph-mon[75144]: pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s, 0 objects/s recovering
Nov 25 20:07:21 compute-0 ceph-mon[75144]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 20:07:21 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:21 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:21 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2487993002' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 20:07:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 25 20:07:21 compute-0 quizzical_proskuriakova[95976]: enabled application 'rbd' on pool 'images'
Nov 25 20:07:21 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 25 20:07:21 compute-0 systemd[1]: libpod-48792d0602a802aa616dc7a24590a074d57f55a489a759c54bd4e3655a1a63bb.scope: Deactivated successfully.
Nov 25 20:07:21 compute-0 podman[95935]: 2025-11-25 20:07:21.431554589 +0000 UTC m=+1.596952509 container died 48792d0602a802aa616dc7a24590a074d57f55a489a759c54bd4e3655a1a63bb (image=quay.io/ceph/ceph:v18, name=quizzical_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:07:21 compute-0 sudo[96232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:21 compute-0 sudo[96232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8152c8e72520d53c109bcd06ba8cc09947f1e063a441fbf4cdb7b6b654a5b7b6-merged.mount: Deactivated successfully.
Nov 25 20:07:21 compute-0 sudo[96232]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:21 compute-0 podman[95935]: 2025-11-25 20:07:21.489478104 +0000 UTC m=+1.654876024 container remove 48792d0602a802aa616dc7a24590a074d57f55a489a759c54bd4e3655a1a63bb (image=quay.io/ceph/ceph:v18, name=quizzical_proskuriakova, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:07:21 compute-0 systemd[1]: libpod-conmon-48792d0602a802aa616dc7a24590a074d57f55a489a759c54bd4e3655a1a63bb.scope: Deactivated successfully.
Nov 25 20:07:21 compute-0 sudo[95868]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:21 compute-0 sudo[96269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:21 compute-0 sudo[96269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:21 compute-0 sudo[96269]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:21 compute-0 sudo[96294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:21 compute-0 sudo[96294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:21 compute-0 sudo[96294]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:21 compute-0 sudo[96342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmzxiyvjuiuumambvplmqcutzyawqkmn ; /usr/bin/python3'
Nov 25 20:07:21 compute-0 sudo[96342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:21 compute-0 sudo[96343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:07:21 compute-0 sudo[96343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:21 compute-0 python3[96358]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:21 compute-0 podman[96370]: 2025-11-25 20:07:21.905712327 +0000 UTC m=+0.068315364 container create 6da1c53db7178db52f58da1bff57580e89798732d3e6d71df4a084822010859f (image=quay.io/ceph/ceph:v18, name=trusting_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:21 compute-0 systemd[1]: Started libpod-conmon-6da1c53db7178db52f58da1bff57580e89798732d3e6d71df4a084822010859f.scope.
Nov 25 20:07:21 compute-0 podman[96370]: 2025-11-25 20:07:21.879589934 +0000 UTC m=+0.042193011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db13bce81bc96db74e9d27a5bf67a441172fbe1b0263939eadd2c8c19edbab5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db13bce81bc96db74e9d27a5bf67a441172fbe1b0263939eadd2c8c19edbab5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:22 compute-0 podman[96370]: 2025-11-25 20:07:22.003898263 +0000 UTC m=+0.166501280 container init 6da1c53db7178db52f58da1bff57580e89798732d3e6d71df4a084822010859f (image=quay.io/ceph/ceph:v18, name=trusting_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:07:22 compute-0 podman[96370]: 2025-11-25 20:07:22.014359873 +0000 UTC m=+0.176962880 container start 6da1c53db7178db52f58da1bff57580e89798732d3e6d71df4a084822010859f (image=quay.io/ceph/ceph:v18, name=trusting_nash, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:22 compute-0 podman[96370]: 2025-11-25 20:07:22.01764323 +0000 UTC m=+0.180246237 container attach 6da1c53db7178db52f58da1bff57580e89798732d3e6d71df4a084822010859f (image=quay.io/ceph/ceph:v18, name=trusting_nash, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:07:22 compute-0 sudo[96343]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:07:22 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:07:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:07:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:07:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:07:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:22 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev ddbe7cc7-62c8-48f8-ab3d-bab96abf715f does not exist
Nov 25 20:07:22 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 32a255a4-83d8-4dad-afba-4bcf116c7ca6 does not exist
Nov 25 20:07:22 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 1de8f1f4-6363-4650-8c7b-73805b304173 does not exist
Nov 25 20:07:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:07:22 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:07:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:07:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:07:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:07:22 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:07:22 compute-0 sudo[96420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:22 compute-0 sudo[96420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:22 compute-0 sudo[96420]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:22 compute-0 sudo[96447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:22 compute-0 sudo[96447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:22 compute-0 sudo[96447]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:22 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2487993002' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 20:07:22 compute-0 ceph-mon[75144]: osdmap e25: 3 total, 3 up, 3 in
Nov 25 20:07:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:07:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:07:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:07:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:07:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:07:22 compute-0 sudo[96489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:22 compute-0 sudo[96489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:22 compute-0 sudo[96489]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:22 compute-0 sudo[96514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:07:22 compute-0 sudo[96514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 25 20:07:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2650394953' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 20:07:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s, 0 objects/s recovering
Nov 25 20:07:22 compute-0 podman[96581]: 2025-11-25 20:07:22.955483476 +0000 UTC m=+0.060753360 container create 4261c1128c33300f2eeba23ae895d36f4065daa3adafdf7568a2a59a56e06066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_boyd, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:07:22 compute-0 systemd[1]: Started libpod-conmon-4261c1128c33300f2eeba23ae895d36f4065daa3adafdf7568a2a59a56e06066.scope.
Nov 25 20:07:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:23 compute-0 podman[96581]: 2025-11-25 20:07:22.931564388 +0000 UTC m=+0.036834352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:23 compute-0 podman[96581]: 2025-11-25 20:07:23.032838746 +0000 UTC m=+0.138108650 container init 4261c1128c33300f2eeba23ae895d36f4065daa3adafdf7568a2a59a56e06066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:23 compute-0 podman[96581]: 2025-11-25 20:07:23.041498682 +0000 UTC m=+0.146768616 container start 4261c1128c33300f2eeba23ae895d36f4065daa3adafdf7568a2a59a56e06066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:23 compute-0 condescending_boyd[96597]: 167 167
Nov 25 20:07:23 compute-0 podman[96581]: 2025-11-25 20:07:23.045696646 +0000 UTC m=+0.150966530 container attach 4261c1128c33300f2eeba23ae895d36f4065daa3adafdf7568a2a59a56e06066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:07:23 compute-0 systemd[1]: libpod-4261c1128c33300f2eeba23ae895d36f4065daa3adafdf7568a2a59a56e06066.scope: Deactivated successfully.
Nov 25 20:07:23 compute-0 podman[96581]: 2025-11-25 20:07:23.047095568 +0000 UTC m=+0.152365482 container died 4261c1128c33300f2eeba23ae895d36f4065daa3adafdf7568a2a59a56e06066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:07:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce6ab95dc6fd2771268da1a761b7fdf305204515894ec37746da70d7bbd3ed50-merged.mount: Deactivated successfully.
Nov 25 20:07:23 compute-0 podman[96581]: 2025-11-25 20:07:23.083046793 +0000 UTC m=+0.188316707 container remove 4261c1128c33300f2eeba23ae895d36f4065daa3adafdf7568a2a59a56e06066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:07:23 compute-0 systemd[1]: libpod-conmon-4261c1128c33300f2eeba23ae895d36f4065daa3adafdf7568a2a59a56e06066.scope: Deactivated successfully.
Nov 25 20:07:23 compute-0 podman[96621]: 2025-11-25 20:07:23.264558746 +0000 UTC m=+0.048771805 container create 09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:07:23 compute-0 systemd[1]: Started libpod-conmon-09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61.scope.
Nov 25 20:07:23 compute-0 podman[96621]: 2025-11-25 20:07:23.244323187 +0000 UTC m=+0.028536226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efb84b4d0c5993c54fce1c6b7db9209ea2887e40cf1225923dd81f526cc871b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efb84b4d0c5993c54fce1c6b7db9209ea2887e40cf1225923dd81f526cc871b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efb84b4d0c5993c54fce1c6b7db9209ea2887e40cf1225923dd81f526cc871b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efb84b4d0c5993c54fce1c6b7db9209ea2887e40cf1225923dd81f526cc871b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efb84b4d0c5993c54fce1c6b7db9209ea2887e40cf1225923dd81f526cc871b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:23 compute-0 podman[96621]: 2025-11-25 20:07:23.357968282 +0000 UTC m=+0.142181311 container init 09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:23 compute-0 podman[96621]: 2025-11-25 20:07:23.365873756 +0000 UTC m=+0.150086825 container start 09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:07:23 compute-0 podman[96621]: 2025-11-25 20:07:23.369859163 +0000 UTC m=+0.154072202 container attach 09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:07:23 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 25 20:07:23 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2650394953' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 20:07:23 compute-0 ceph-mon[75144]: pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s, 0 objects/s recovering
Nov 25 20:07:23 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2650394953' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 20:07:23 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 25 20:07:23 compute-0 trusting_nash[96399]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 25 20:07:23 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 25 20:07:23 compute-0 systemd[1]: libpod-6da1c53db7178db52f58da1bff57580e89798732d3e6d71df4a084822010859f.scope: Deactivated successfully.
Nov 25 20:07:23 compute-0 podman[96370]: 2025-11-25 20:07:23.445509543 +0000 UTC m=+1.608112540 container died 6da1c53db7178db52f58da1bff57580e89798732d3e6d71df4a084822010859f (image=quay.io/ceph/ceph:v18, name=trusting_nash, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:07:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8db13bce81bc96db74e9d27a5bf67a441172fbe1b0263939eadd2c8c19edbab5-merged.mount: Deactivated successfully.
Nov 25 20:07:23 compute-0 podman[96370]: 2025-11-25 20:07:23.496250565 +0000 UTC m=+1.658853562 container remove 6da1c53db7178db52f58da1bff57580e89798732d3e6d71df4a084822010859f (image=quay.io/ceph/ceph:v18, name=trusting_nash, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:23 compute-0 systemd[1]: libpod-conmon-6da1c53db7178db52f58da1bff57580e89798732d3e6d71df4a084822010859f.scope: Deactivated successfully.
Nov 25 20:07:23 compute-0 sudo[96342]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:23 compute-0 sudo[96678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvysuwragtjiqqbahyugjfffyexsqjkk ; /usr/bin/python3'
Nov 25 20:07:23 compute-0 sudo[96678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:23 compute-0 python3[96680]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:23 compute-0 podman[96681]: 2025-11-25 20:07:23.993873268 +0000 UTC m=+0.075001392 container create 4e71ccaef2732a9740f6626176d20c50971c661f7919bb7cc59639cdee33e30c (image=quay.io/ceph/ceph:v18, name=busy_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:24 compute-0 systemd[1]: Started libpod-conmon-4e71ccaef2732a9740f6626176d20c50971c661f7919bb7cc59639cdee33e30c.scope.
Nov 25 20:07:24 compute-0 podman[96681]: 2025-11-25 20:07:23.964064865 +0000 UTC m=+0.045193039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5c791395da001f5337dcffd1df0c7d6292500ce5417a6c8fe39077e998532e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5c791395da001f5337dcffd1df0c7d6292500ce5417a6c8fe39077e998532e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:24 compute-0 podman[96681]: 2025-11-25 20:07:24.10101097 +0000 UTC m=+0.182139144 container init 4e71ccaef2732a9740f6626176d20c50971c661f7919bb7cc59639cdee33e30c (image=quay.io/ceph/ceph:v18, name=busy_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:24 compute-0 podman[96681]: 2025-11-25 20:07:24.110980915 +0000 UTC m=+0.192109029 container start 4e71ccaef2732a9740f6626176d20c50971c661f7919bb7cc59639cdee33e30c (image=quay.io/ceph/ceph:v18, name=busy_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:24 compute-0 podman[96681]: 2025-11-25 20:07:24.114790608 +0000 UTC m=+0.195918732 container attach 4e71ccaef2732a9740f6626176d20c50971c661f7919bb7cc59639cdee33e30c (image=quay.io/ceph/ceph:v18, name=busy_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:07:24 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2650394953' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 20:07:24 compute-0 ceph-mon[75144]: osdmap e26: 3 total, 3 up, 3 in
Nov 25 20:07:24 compute-0 reverent_babbage[96638]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:07:24 compute-0 reverent_babbage[96638]: --> relative data size: 1.0
Nov 25 20:07:24 compute-0 reverent_babbage[96638]: --> All data devices are unavailable
Nov 25 20:07:24 compute-0 systemd[1]: libpod-09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61.scope: Deactivated successfully.
Nov 25 20:07:24 compute-0 podman[96621]: 2025-11-25 20:07:24.552843796 +0000 UTC m=+1.337056835 container died 09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:24 compute-0 systemd[1]: libpod-09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61.scope: Consumed 1.108s CPU time.
Nov 25 20:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8efb84b4d0c5993c54fce1c6b7db9209ea2887e40cf1225923dd81f526cc871b-merged.mount: Deactivated successfully.
Nov 25 20:07:24 compute-0 podman[96621]: 2025-11-25 20:07:24.628485405 +0000 UTC m=+1.412698434 container remove 09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:24 compute-0 systemd[1]: libpod-conmon-09cfdfea8c4f31665447c89407ac8147309e8881f57544645db4d95789c34c61.scope: Deactivated successfully.
Nov 25 20:07:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 25 20:07:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1904029735' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 20:07:24 compute-0 sudo[96514]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:24 compute-0 sudo[96756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:24 compute-0 sudo[96756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:24 compute-0 sudo[96756]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:24 compute-0 sudo[96781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:24 compute-0 sudo[96781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:24 compute-0 sudo[96781]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:24 compute-0 sudo[96806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:24 compute-0 sudo[96806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:24 compute-0 sudo[96806]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:25 compute-0 sudo[96831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:07:25 compute-0 sudo[96831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 25 20:07:25 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1904029735' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 20:07:25 compute-0 ceph-mon[75144]: pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:25 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1904029735' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 20:07:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 25 20:07:25 compute-0 busy_faraday[96700]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 25 20:07:25 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 25 20:07:25 compute-0 systemd[1]: libpod-4e71ccaef2732a9740f6626176d20c50971c661f7919bb7cc59639cdee33e30c.scope: Deactivated successfully.
Nov 25 20:07:25 compute-0 podman[96681]: 2025-11-25 20:07:25.469407372 +0000 UTC m=+1.550535536 container died 4e71ccaef2732a9740f6626176d20c50971c661f7919bb7cc59639cdee33e30c (image=quay.io/ceph/ceph:v18, name=busy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 20:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f5c791395da001f5337dcffd1df0c7d6292500ce5417a6c8fe39077e998532e-merged.mount: Deactivated successfully.
Nov 25 20:07:25 compute-0 podman[96681]: 2025-11-25 20:07:25.525657587 +0000 UTC m=+1.606785721 container remove 4e71ccaef2732a9740f6626176d20c50971c661f7919bb7cc59639cdee33e30c (image=quay.io/ceph/ceph:v18, name=busy_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:07:25 compute-0 systemd[1]: libpod-conmon-4e71ccaef2732a9740f6626176d20c50971c661f7919bb7cc59639cdee33e30c.scope: Deactivated successfully.
Nov 25 20:07:25 compute-0 podman[96896]: 2025-11-25 20:07:25.555665905 +0000 UTC m=+0.097174747 container create 2384352f7f19b241a507e2047125e3bb3031f8c490e2b18b84bb8df88d04ee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:25 compute-0 sudo[96678]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:25 compute-0 systemd[1]: Started libpod-conmon-2384352f7f19b241a507e2047125e3bb3031f8c490e2b18b84bb8df88d04ee4a.scope.
Nov 25 20:07:25 compute-0 podman[96896]: 2025-11-25 20:07:25.524713298 +0000 UTC m=+0.066222170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:25 compute-0 podman[96896]: 2025-11-25 20:07:25.634453537 +0000 UTC m=+0.175962379 container init 2384352f7f19b241a507e2047125e3bb3031f8c490e2b18b84bb8df88d04ee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:07:25 compute-0 podman[96896]: 2025-11-25 20:07:25.645545476 +0000 UTC m=+0.187054318 container start 2384352f7f19b241a507e2047125e3bb3031f8c490e2b18b84bb8df88d04ee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:07:25 compute-0 podman[96896]: 2025-11-25 20:07:25.649118012 +0000 UTC m=+0.190626854 container attach 2384352f7f19b241a507e2047125e3bb3031f8c490e2b18b84bb8df88d04ee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:07:25 compute-0 interesting_bassi[96927]: 167 167
Nov 25 20:07:25 compute-0 systemd[1]: libpod-2384352f7f19b241a507e2047125e3bb3031f8c490e2b18b84bb8df88d04ee4a.scope: Deactivated successfully.
Nov 25 20:07:25 compute-0 podman[96896]: 2025-11-25 20:07:25.650255205 +0000 UTC m=+0.191764057 container died 2384352f7f19b241a507e2047125e3bb3031f8c490e2b18b84bb8df88d04ee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a3c19e91947ec66254bd78664a9c0a7bfc16c404658ec4daf9b38e10c0fb348-merged.mount: Deactivated successfully.
Nov 25 20:07:25 compute-0 podman[96896]: 2025-11-25 20:07:25.680995325 +0000 UTC m=+0.222504157 container remove 2384352f7f19b241a507e2047125e3bb3031f8c490e2b18b84bb8df88d04ee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 20:07:25 compute-0 systemd[1]: libpod-conmon-2384352f7f19b241a507e2047125e3bb3031f8c490e2b18b84bb8df88d04ee4a.scope: Deactivated successfully.
Nov 25 20:07:25 compute-0 podman[96950]: 2025-11-25 20:07:25.830090159 +0000 UTC m=+0.039092888 container create 5add4da7db95f4703613a63fe8a7d7e13b71a362d898f9f21de4d1dbc17d5314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:25 compute-0 systemd[1]: Started libpod-conmon-5add4da7db95f4703613a63fe8a7d7e13b71a362d898f9f21de4d1dbc17d5314.scope.
Nov 25 20:07:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428fe42da833a06d808f878484223f94e9410936c7c3324b43aab3504f0a908/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428fe42da833a06d808f878484223f94e9410936c7c3324b43aab3504f0a908/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428fe42da833a06d808f878484223f94e9410936c7c3324b43aab3504f0a908/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428fe42da833a06d808f878484223f94e9410936c7c3324b43aab3504f0a908/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:25 compute-0 podman[96950]: 2025-11-25 20:07:25.813300032 +0000 UTC m=+0.022302741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:25 compute-0 podman[96950]: 2025-11-25 20:07:25.915526869 +0000 UTC m=+0.124529598 container init 5add4da7db95f4703613a63fe8a7d7e13b71a362d898f9f21de4d1dbc17d5314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:07:25 compute-0 podman[96950]: 2025-11-25 20:07:25.921422694 +0000 UTC m=+0.130425423 container start 5add4da7db95f4703613a63fe8a7d7e13b71a362d898f9f21de4d1dbc17d5314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:25 compute-0 podman[96950]: 2025-11-25 20:07:25.926105283 +0000 UTC m=+0.135108012 container attach 5add4da7db95f4703613a63fe8a7d7e13b71a362d898f9f21de4d1dbc17d5314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:26 compute-0 ceph-mon[75144]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 20:07:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:26 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1904029735' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 20:07:26 compute-0 ceph-mon[75144]: osdmap e27: 3 total, 3 up, 3 in
Nov 25 20:07:26 compute-0 ceph-mon[75144]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]: {
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:     "0": [
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:         {
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "devices": [
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "/dev/loop3"
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             ],
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_name": "ceph_lv0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_size": "21470642176",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "name": "ceph_lv0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "tags": {
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.cluster_name": "ceph",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.crush_device_class": "",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.encrypted": "0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.osd_id": "0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.type": "block",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.vdo": "0"
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             },
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "type": "block",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "vg_name": "ceph_vg0"
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:         }
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:     ],
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:     "1": [
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:         {
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "devices": [
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "/dev/loop4"
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             ],
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_name": "ceph_lv1",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_size": "21470642176",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "name": "ceph_lv1",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "tags": {
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.cluster_name": "ceph",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.crush_device_class": "",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.encrypted": "0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.osd_id": "1",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.type": "block",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.vdo": "0"
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             },
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "type": "block",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "vg_name": "ceph_vg1"
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:         }
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:     ],
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:     "2": [
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:         {
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "devices": [
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "/dev/loop5"
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             ],
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_name": "ceph_lv2",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_size": "21470642176",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "name": "ceph_lv2",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "tags": {
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.cluster_name": "ceph",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.crush_device_class": "",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.encrypted": "0",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.osd_id": "2",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.type": "block",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:                 "ceph.vdo": "0"
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             },
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "type": "block",
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:             "vg_name": "ceph_vg2"
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:         }
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]:     ]
Nov 25 20:07:26 compute-0 unruffled_hugle[96967]: }
Nov 25 20:07:26 compute-0 systemd[1]: libpod-5add4da7db95f4703613a63fe8a7d7e13b71a362d898f9f21de4d1dbc17d5314.scope: Deactivated successfully.
Nov 25 20:07:26 compute-0 podman[96950]: 2025-11-25 20:07:26.683910737 +0000 UTC m=+0.892913436 container died 5add4da7db95f4703613a63fe8a7d7e13b71a362d898f9f21de4d1dbc17d5314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 20:07:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5428fe42da833a06d808f878484223f94e9410936c7c3324b43aab3504f0a908-merged.mount: Deactivated successfully.
Nov 25 20:07:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:26 compute-0 podman[96950]: 2025-11-25 20:07:26.740737349 +0000 UTC m=+0.949740038 container remove 5add4da7db95f4703613a63fe8a7d7e13b71a362d898f9f21de4d1dbc17d5314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:26 compute-0 systemd[1]: libpod-conmon-5add4da7db95f4703613a63fe8a7d7e13b71a362d898f9f21de4d1dbc17d5314.scope: Deactivated successfully.
Nov 25 20:07:26 compute-0 sudo[96831]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:07:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:07:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:07:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:07:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:07:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:07:26 compute-0 sudo[96988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:26 compute-0 sudo[96988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:26 compute-0 sudo[96988]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:26 compute-0 sudo[97013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:07:26 compute-0 sudo[97013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:26 compute-0 sudo[97013]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:27 compute-0 sudo[97038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:27 compute-0 sudo[97038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:27 compute-0 sudo[97038]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:27 compute-0 sudo[97063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:07:27 compute-0 sudo[97063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:27 compute-0 sudo[97202]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehluorgfstmymyrntyqhfjautczvejqz ; /usr/bin/python3'
Nov 25 20:07:27 compute-0 sudo[97202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:27 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 20:07:27 compute-0 ceph-mon[75144]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 20:07:27 compute-0 ceph-mon[75144]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
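[annotation] POOL_APP_NOT_ENABLED clears once every pool carries an application tag. A sketch of the command that typically clears it; the pool name here is hypothetical, since the log does not name the affected pool:

    # Tag a pool with the rbd application (pool name assumed, not from this log)
    sudo cephadm shell -- ceph osd pool application enable volumes rbd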
Nov 25 20:07:27 compute-0 podman[97203]: 2025-11-25 20:07:27.479605564 +0000 UTC m=+0.048503487 container create 878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:27 compute-0 systemd[1]: Started libpod-conmon-878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2.scope.
Nov 25 20:07:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:27 compute-0 podman[97203]: 2025-11-25 20:07:27.462150997 +0000 UTC m=+0.031048940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:27 compute-0 podman[97203]: 2025-11-25 20:07:27.565378044 +0000 UTC m=+0.134275987 container init 878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:27 compute-0 podman[97203]: 2025-11-25 20:07:27.57641369 +0000 UTC m=+0.145311643 container start 878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 20:07:27 compute-0 hopeful_bohr[97222]: 167 167
Nov 25 20:07:27 compute-0 podman[97203]: 2025-11-25 20:07:27.580565183 +0000 UTC m=+0.149463126 container attach 878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:27 compute-0 systemd[1]: libpod-878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2.scope: Deactivated successfully.
Nov 25 20:07:27 compute-0 conmon[97222]: conmon 878289ca9b08680ec989 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2.scope/container/memory.events
Nov 25 20:07:27 compute-0 podman[97203]: 2025-11-25 20:07:27.583367226 +0000 UTC m=+0.152265179 container died 878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a86ee9961b587048530fabae47a2635a672bad1d44a3b8cbc2a1494936c734f5-merged.mount: Deactivated successfully.
Nov 25 20:07:27 compute-0 python3[97211]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 20:07:27 compute-0 podman[97203]: 2025-11-25 20:07:27.628521512 +0000 UTC m=+0.197419425 container remove 878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 20:07:27 compute-0 sudo[97202]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:27 compute-0 systemd[1]: libpod-conmon-878289ca9b08680ec989d073f47e1713825df0e7a9961ef8765ab9e1e8d365f2.scope: Deactivated successfully.
Nov 25 20:07:27 compute-0 podman[97271]: 2025-11-25 20:07:27.823775833 +0000 UTC m=+0.045615481 container create b40dfaf8f024656e78ee97df015b911d3fd58ca2daf52d50c9882e6d445facf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:27 compute-0 systemd[1]: Started libpod-conmon-b40dfaf8f024656e78ee97df015b911d3fd58ca2daf52d50c9882e6d445facf1.scope.
Nov 25 20:07:27 compute-0 sudo[97331]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmvusktwqpswgixupttjbwhizokvbsp ; /usr/bin/python3'
Nov 25 20:07:27 compute-0 sudo[97331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:27 compute-0 podman[97271]: 2025-11-25 20:07:27.805352568 +0000 UTC m=+0.027192226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:07:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7efa441ed3df1a0c968d40be3459d630dc732834056d5d6660418d68275b947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7efa441ed3df1a0c968d40be3459d630dc732834056d5d6660418d68275b947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7efa441ed3df1a0c968d40be3459d630dc732834056d5d6660418d68275b947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7efa441ed3df1a0c968d40be3459d630dc732834056d5d6660418d68275b947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:27 compute-0 podman[97271]: 2025-11-25 20:07:27.928614647 +0000 UTC m=+0.150454305 container init b40dfaf8f024656e78ee97df015b911d3fd58ca2daf52d50c9882e6d445facf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:27 compute-0 podman[97271]: 2025-11-25 20:07:27.936050977 +0000 UTC m=+0.157890605 container start b40dfaf8f024656e78ee97df015b911d3fd58ca2daf52d50c9882e6d445facf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:07:27 compute-0 podman[97271]: 2025-11-25 20:07:27.943839098 +0000 UTC m=+0.165678766 container attach b40dfaf8f024656e78ee97df015b911d3fd58ca2daf52d50c9882e6d445facf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 20:07:28 compute-0 python3[97336]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764101247.2619255-36712-135722929858730/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=86ed7b0354b7c5dc128f5b75fa89f43fbe905230 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:28 compute-0 sudo[97331]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:28 compute-0 sudo[97388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nerqqixtyyecrmxztczatlsuxtclrulh ; /usr/bin/python3'
Nov 25 20:07:28 compute-0 sudo[97388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:28 compute-0 ceph-mon[75144]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 20:07:28 compute-0 ceph-mon[75144]: Cluster is now healthy
Nov 25 20:07:28 compute-0 python3[97390]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
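[annotation] The task above wraps a plain ceph auth import in a one-shot container. A trimmed sketch of the same import, assuming the host paths mounted in the logged task; the read-back that confirms the caps appears further down in this log as auth get client.openstack:

    # Import the openstack client key into the cluster's auth database
    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      auth import -i /etc/ceph/ceph.client.openstack.keyring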
Nov 25 20:07:28 compute-0 podman[97399]: 2025-11-25 20:07:28.704292261 +0000 UTC m=+0.053716421 container create 6ef0a34978ffc0282e7654a06c777ec3a22e63c90fe97565a97c9cbf1f194e3e (image=quay.io/ceph/ceph:v18, name=serene_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:28 compute-0 systemd[1]: Started libpod-conmon-6ef0a34978ffc0282e7654a06c777ec3a22e63c90fe97565a97c9cbf1f194e3e.scope.
Nov 25 20:07:28 compute-0 podman[97399]: 2025-11-25 20:07:28.68364517 +0000 UTC m=+0.033069380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b5a1a7aff471a31ef7f820c471d491a1a4c45da44e56bf3662b916549ef2e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b5a1a7aff471a31ef7f820c471d491a1a4c45da44e56bf3662b916549ef2e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:28 compute-0 podman[97399]: 2025-11-25 20:07:28.805954122 +0000 UTC m=+0.155378342 container init 6ef0a34978ffc0282e7654a06c777ec3a22e63c90fe97565a97c9cbf1f194e3e (image=quay.io/ceph/ceph:v18, name=serene_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:28 compute-0 podman[97399]: 2025-11-25 20:07:28.81606965 +0000 UTC m=+0.165493850 container start 6ef0a34978ffc0282e7654a06c777ec3a22e63c90fe97565a97c9cbf1f194e3e (image=quay.io/ceph/ceph:v18, name=serene_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:28 compute-0 podman[97399]: 2025-11-25 20:07:28.819718048 +0000 UTC m=+0.169142298 container attach 6ef0a34978ffc0282e7654a06c777ec3a22e63c90fe97565a97c9cbf1f194e3e (image=quay.io/ceph/ceph:v18, name=serene_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]: {
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "osd_id": 2,
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "type": "bluestore"
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:     },
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "osd_id": 1,
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "type": "bluestore"
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:     },
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "osd_id": 0,
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:         "type": "bluestore"
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]:     }
Nov 25 20:07:28 compute-0 relaxed_williamson[97332]: }
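[annotation] The relaxed_williamson output above is ceph-volume raw list --format json, keyed by OSD uuid. A sketch of reproducing it interactively through cephadm instead of a hand-rolled podman call (fsid copied from the command line logged earlier):

    sudo cephadm shell --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- \
      ceph-volume raw list --format json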
Nov 25 20:07:28 compute-0 systemd[1]: libpod-b40dfaf8f024656e78ee97df015b911d3fd58ca2daf52d50c9882e6d445facf1.scope: Deactivated successfully.
Nov 25 20:07:28 compute-0 podman[97436]: 2025-11-25 20:07:28.983418335 +0000 UTC m=+0.033346998 container died b40dfaf8f024656e78ee97df015b911d3fd58ca2daf52d50c9882e6d445facf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7efa441ed3df1a0c968d40be3459d630dc732834056d5d6660418d68275b947-merged.mount: Deactivated successfully.
Nov 25 20:07:29 compute-0 podman[97436]: 2025-11-25 20:07:29.03495279 +0000 UTC m=+0.084881433 container remove b40dfaf8f024656e78ee97df015b911d3fd58ca2daf52d50c9882e6d445facf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:29 compute-0 systemd[1]: libpod-conmon-b40dfaf8f024656e78ee97df015b911d3fd58ca2daf52d50c9882e6d445facf1.scope: Deactivated successfully.
Nov 25 20:07:29 compute-0 sudo[97063]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:07:29 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:07:29 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:29 compute-0 sudo[97453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:07:29 compute-0 sudo[97453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:29 compute-0 sudo[97453]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:29 compute-0 sudo[97495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:07:29 compute-0 sudo[97495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:07:29 compute-0 sudo[97495]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 25 20:07:29 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/513359946' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 20:07:29 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/513359946' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 20:07:29 compute-0 systemd[1]: libpod-6ef0a34978ffc0282e7654a06c777ec3a22e63c90fe97565a97c9cbf1f194e3e.scope: Deactivated successfully.
Nov 25 20:07:29 compute-0 podman[97399]: 2025-11-25 20:07:29.435635753 +0000 UTC m=+0.785059963 container died 6ef0a34978ffc0282e7654a06c777ec3a22e63c90fe97565a97c9cbf1f194e3e (image=quay.io/ceph/ceph:v18, name=serene_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-94b5a1a7aff471a31ef7f820c471d491a1a4c45da44e56bf3662b916549ef2e1-merged.mount: Deactivated successfully.
Nov 25 20:07:29 compute-0 ceph-mon[75144]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:29 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:29 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:07:29 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/513359946' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 20:07:29 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/513359946' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 20:07:29 compute-0 podman[97399]: 2025-11-25 20:07:29.495510476 +0000 UTC m=+0.844934656 container remove 6ef0a34978ffc0282e7654a06c777ec3a22e63c90fe97565a97c9cbf1f194e3e (image=quay.io/ceph/ceph:v18, name=serene_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:07:29 compute-0 systemd[1]: libpod-conmon-6ef0a34978ffc0282e7654a06c777ec3a22e63c90fe97565a97c9cbf1f194e3e.scope: Deactivated successfully.
Nov 25 20:07:29 compute-0 sudo[97388]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:30 compute-0 sudo[97560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqxycxjkxjdascigopsohcrjpgqsjxhf ; /usr/bin/python3'
Nov 25 20:07:30 compute-0 sudo[97560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:31 compute-0 python3[97562]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:31 compute-0 podman[97564]: 2025-11-25 20:07:31.258391427 +0000 UTC m=+0.074179717 container create 364cedd6a075d9c81fd5558f740380eac53ecfafb186dc63aac97b52643437d4 (image=quay.io/ceph/ceph:v18, name=jovial_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:07:31 compute-0 systemd[1]: Started libpod-conmon-364cedd6a075d9c81fd5558f740380eac53ecfafb186dc63aac97b52643437d4.scope.
Nov 25 20:07:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:31 compute-0 podman[97564]: 2025-11-25 20:07:31.230910893 +0000 UTC m=+0.046699253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb2b9fcef2eb1a39ba996830fec5b081c5a9e8e4f6c132b0e1df9ffa89070c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb2b9fcef2eb1a39ba996830fec5b081c5a9e8e4f6c132b0e1df9ffa89070c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:31 compute-0 podman[97564]: 2025-11-25 20:07:31.344394974 +0000 UTC m=+0.160183314 container init 364cedd6a075d9c81fd5558f740380eac53ecfafb186dc63aac97b52643437d4 (image=quay.io/ceph/ceph:v18, name=jovial_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:07:31 compute-0 podman[97564]: 2025-11-25 20:07:31.356245184 +0000 UTC m=+0.172033504 container start 364cedd6a075d9c81fd5558f740380eac53ecfafb186dc63aac97b52643437d4 (image=quay.io/ceph/ceph:v18, name=jovial_hypatia, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:07:31 compute-0 podman[97564]: 2025-11-25 20:07:31.360336815 +0000 UTC m=+0.176125165 container attach 364cedd6a075d9c81fd5558f740380eac53ecfafb186dc63aac97b52643437d4 (image=quay.io/ceph/ceph:v18, name=jovial_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:31 compute-0 ceph-mon[75144]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 20:07:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828101118' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 20:07:31 compute-0 jovial_hypatia[97580]: 
Nov 25 20:07:31 compute-0 jovial_hypatia[97580]: {"fsid":"712dd110-763a-5547-8ef7-acda1414fdce","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":140,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":27,"num_osds":3,"num_up_osds":3,"osd_up_since":1764101231,"num_in_osds":3,"osd_in_since":1764101203,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83693568,"bytes_avail":64328232960,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T20:06:58.714937+0000","services":{}},"progress_events":{}}
Nov 25 20:07:32 compute-0 systemd[1]: libpod-364cedd6a075d9c81fd5558f740380eac53ecfafb186dc63aac97b52643437d4.scope: Deactivated successfully.
Nov 25 20:07:32 compute-0 podman[97564]: 2025-11-25 20:07:32.007067632 +0000 UTC m=+0.822855952 container died 364cedd6a075d9c81fd5558f740380eac53ecfafb186dc63aac97b52643437d4 (image=quay.io/ceph/ceph:v18, name=jovial_hypatia, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eb2b9fcef2eb1a39ba996830fec5b081c5a9e8e4f6c132b0e1df9ffa89070c9-merged.mount: Deactivated successfully.
Nov 25 20:07:32 compute-0 podman[97564]: 2025-11-25 20:07:32.068341916 +0000 UTC m=+0.884130236 container remove 364cedd6a075d9c81fd5558f740380eac53ecfafb186dc63aac97b52643437d4 (image=quay.io/ceph/ceph:v18, name=jovial_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:32 compute-0 systemd[1]: libpod-conmon-364cedd6a075d9c81fd5558f740380eac53ecfafb186dc63aac97b52643437d4.scope: Deactivated successfully.
Nov 25 20:07:32 compute-0 sudo[97560]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:32 compute-0 sudo[97641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deesuupeydujpkcdznrikenxbtfknrkq ; /usr/bin/python3'
Nov 25 20:07:32 compute-0 sudo[97641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:32 compute-0 python3[97643]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:32 compute-0 podman[97644]: 2025-11-25 20:07:32.511750924 +0000 UTC m=+0.051767593 container create d7f5e64f6fdb261d117940c2e7932d313aae3cf4a2b469de56531d17f71d9d45 (image=quay.io/ceph/ceph:v18, name=musing_bardeen, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:07:32 compute-0 systemd[1]: Started libpod-conmon-d7f5e64f6fdb261d117940c2e7932d313aae3cf4a2b469de56531d17f71d9d45.scope.
Nov 25 20:07:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:32 compute-0 podman[97644]: 2025-11-25 20:07:32.49570329 +0000 UTC m=+0.035719939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b060d589e63f282e15e22cef974d4efe146a15bbe53c42ed8b63662744a18d0d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b060d589e63f282e15e22cef974d4efe146a15bbe53c42ed8b63662744a18d0d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:32 compute-0 podman[97644]: 2025-11-25 20:07:32.615274889 +0000 UTC m=+0.155291608 container init d7f5e64f6fdb261d117940c2e7932d313aae3cf4a2b469de56531d17f71d9d45 (image=quay.io/ceph/ceph:v18, name=musing_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:07:32 compute-0 podman[97644]: 2025-11-25 20:07:32.626025038 +0000 UTC m=+0.166041707 container start d7f5e64f6fdb261d117940c2e7932d313aae3cf4a2b469de56531d17f71d9d45 (image=quay.io/ceph/ceph:v18, name=musing_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:32 compute-0 podman[97644]: 2025-11-25 20:07:32.629911913 +0000 UTC m=+0.169928622 container attach d7f5e64f6fdb261d117940c2e7932d313aae3cf4a2b469de56531d17f71d9d45 (image=quay.io/ceph/ceph:v18, name=musing_bardeen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:32 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2828101118' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 20:07:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 20:07:33 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2102123881' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 20:07:33 compute-0 musing_bardeen[97659]: 
Nov 25 20:07:33 compute-0 musing_bardeen[97659]: {"epoch":1,"fsid":"712dd110-763a-5547-8ef7-acda1414fdce","modified":"2025-11-25T20:05:05.443078Z","created":"2025-11-25T20:05:05.443078Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 25 20:07:33 compute-0 musing_bardeen[97659]: dumped monmap epoch 1
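[annotation] Note the odd "disallowed_leaders: " and "removed_ranks: " keys in the monmap JSON, with a colon and space embedded in the key names; that appears to be Ceph's own serialization quirk rather than log corruption, so any parser must quote the keys exactly as emitted. A sketch of extracting the mon endpoints, assuming the JSON above was saved to monmap.json:

    # One line per mon: name plus its first (v2) address
    jq -r '.mons[] | "\(.name) \(.public_addrs.addrvec[0].addr)"' monmap.json
    # -> compute-0 192.168.122.100:3300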
Nov 25 20:07:33 compute-0 systemd[1]: libpod-d7f5e64f6fdb261d117940c2e7932d313aae3cf4a2b469de56531d17f71d9d45.scope: Deactivated successfully.
Nov 25 20:07:33 compute-0 podman[97644]: 2025-11-25 20:07:33.321225999 +0000 UTC m=+0.861242628 container died d7f5e64f6fdb261d117940c2e7932d313aae3cf4a2b469de56531d17f71d9d45 (image=quay.io/ceph/ceph:v18, name=musing_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b060d589e63f282e15e22cef974d4efe146a15bbe53c42ed8b63662744a18d0d-merged.mount: Deactivated successfully.
Nov 25 20:07:33 compute-0 podman[97644]: 2025-11-25 20:07:33.37901186 +0000 UTC m=+0.919028529 container remove d7f5e64f6fdb261d117940c2e7932d313aae3cf4a2b469de56531d17f71d9d45 (image=quay.io/ceph/ceph:v18, name=musing_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:07:33 compute-0 systemd[1]: libpod-conmon-d7f5e64f6fdb261d117940c2e7932d313aae3cf4a2b469de56531d17f71d9d45.scope: Deactivated successfully.
Nov 25 20:07:33 compute-0 sudo[97641]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:33 compute-0 sudo[97719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nshgqmujnhznxxomhfssecnuoejzngtk ; /usr/bin/python3'
Nov 25 20:07:33 compute-0 sudo[97719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:33 compute-0 ceph-mon[75144]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:33 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2102123881' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 20:07:33 compute-0 python3[97721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:34 compute-0 podman[97722]: 2025-11-25 20:07:34.085878687 +0000 UTC m=+0.081149623 container create f066c7672dd9f1fbb65794356cebc59d6fa9fbca3c6ddc1054fa502713f606e8 (image=quay.io/ceph/ceph:v18, name=tender_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:07:34 compute-0 systemd[1]: Started libpod-conmon-f066c7672dd9f1fbb65794356cebc59d6fa9fbca3c6ddc1054fa502713f606e8.scope.
Nov 25 20:07:34 compute-0 podman[97722]: 2025-11-25 20:07:34.052447627 +0000 UTC m=+0.047718633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408ce69ee2abed32613a729c4d892c876d8fee606d7dde1797ee892c7e1321ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408ce69ee2abed32613a729c4d892c876d8fee606d7dde1797ee892c7e1321ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:34 compute-0 podman[97722]: 2025-11-25 20:07:34.176753828 +0000 UTC m=+0.172024744 container init f066c7672dd9f1fbb65794356cebc59d6fa9fbca3c6ddc1054fa502713f606e8 (image=quay.io/ceph/ceph:v18, name=tender_bardeen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:34 compute-0 podman[97722]: 2025-11-25 20:07:34.182810207 +0000 UTC m=+0.178081093 container start f066c7672dd9f1fbb65794356cebc59d6fa9fbca3c6ddc1054fa502713f606e8 (image=quay.io/ceph/ceph:v18, name=tender_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:07:34 compute-0 podman[97722]: 2025-11-25 20:07:34.185911399 +0000 UTC m=+0.181182305 container attach f066c7672dd9f1fbb65794356cebc59d6fa9fbca3c6ddc1054fa502713f606e8 (image=quay.io/ceph/ceph:v18, name=tender_bardeen, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:07:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 25 20:07:34 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3736100926' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 20:07:34 compute-0 tender_bardeen[97738]: [client.openstack]
Nov 25 20:07:34 compute-0 tender_bardeen[97738]:         key = AQDXCyZpAAAAABAA6kidp+XIon3+r0gcfgtA2g==
Nov 25 20:07:34 compute-0 tender_bardeen[97738]:         caps mgr = "allow *"
Nov 25 20:07:34 compute-0 tender_bardeen[97738]:         caps mon = "profile rbd"
Nov 25 20:07:34 compute-0 tender_bardeen[97738]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
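[annotation] The caps above are the usual RBD-client shape for an OpenStack consumer. A sketch of how a key with exactly these caps could be minted in one step; get-or-create returns the existing key when the caps already match, and errors if they differ:

    sudo cephadm shell -- ceph auth get-or-create client.openstack \
      mgr 'allow *' \
      mon 'profile rbd' \
      osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data'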
Nov 25 20:07:34 compute-0 systemd[1]: libpod-f066c7672dd9f1fbb65794356cebc59d6fa9fbca3c6ddc1054fa502713f606e8.scope: Deactivated successfully.
Nov 25 20:07:34 compute-0 podman[97722]: 2025-11-25 20:07:34.799737851 +0000 UTC m=+0.795008777 container died f066c7672dd9f1fbb65794356cebc59d6fa9fbca3c6ddc1054fa502713f606e8 (image=quay.io/ceph/ceph:v18, name=tender_bardeen, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 20:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-408ce69ee2abed32613a729c4d892c876d8fee606d7dde1797ee892c7e1321ac-merged.mount: Deactivated successfully.
Nov 25 20:07:34 compute-0 podman[97722]: 2025-11-25 20:07:34.859082188 +0000 UTC m=+0.854353104 container remove f066c7672dd9f1fbb65794356cebc59d6fa9fbca3c6ddc1054fa502713f606e8 (image=quay.io/ceph/ceph:v18, name=tender_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:34 compute-0 systemd[1]: libpod-conmon-f066c7672dd9f1fbb65794356cebc59d6fa9fbca3c6ddc1054fa502713f606e8.scope: Deactivated successfully.
Nov 25 20:07:34 compute-0 sudo[97719]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:34 compute-0 ceph-mon[75144]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:34 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3736100926' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 20:07:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:36 compute-0 sudo[97923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nolncewvgqvmbuasdaebwtpnfnaoqbbi ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764101255.9160724-36784-126343990847305/async_wrapper.py j577345348833 30 /home/zuul/.ansible/tmp/ansible-tmp-1764101255.9160724-36784-126343990847305/AnsiballZ_command.py _'
Nov 25 20:07:36 compute-0 sudo[97923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:36 compute-0 ansible-async_wrapper.py[97925]: Invoked with j577345348833 30 /home/zuul/.ansible/tmp/ansible-tmp-1764101255.9160724-36784-126343990847305/AnsiballZ_command.py _
Nov 25 20:07:36 compute-0 ansible-async_wrapper.py[97928]: Starting module and watcher
Nov 25 20:07:36 compute-0 ansible-async_wrapper.py[97928]: Start watching 97929 (30)
Nov 25 20:07:36 compute-0 ansible-async_wrapper.py[97929]: Start module (97929)
Nov 25 20:07:36 compute-0 ansible-async_wrapper.py[97925]: Return async_wrapper task started.
Nov 25 20:07:36 compute-0 sudo[97923]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:36 compute-0 python3[97930]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:36 compute-0 podman[97931]: 2025-11-25 20:07:36.865375305 +0000 UTC m=+0.055129284 container create 9fcd2c3e9219e626716d5b11a7338b0e854725efe7acba3e1f3e523d22ccacee (image=quay.io/ceph/ceph:v18, name=upbeat_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:36 compute-0 systemd[1]: Started libpod-conmon-9fcd2c3e9219e626716d5b11a7338b0e854725efe7acba3e1f3e523d22ccacee.scope.
Nov 25 20:07:36 compute-0 podman[97931]: 2025-11-25 20:07:36.836330174 +0000 UTC m=+0.026084173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bd22ec6e5675aca6081d69028b7802d06469f65eccff2f67f68e579ed383cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bd22ec6e5675aca6081d69028b7802d06469f65eccff2f67f68e579ed383cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:36 compute-0 podman[97931]: 2025-11-25 20:07:36.966242275 +0000 UTC m=+0.155996274 container init 9fcd2c3e9219e626716d5b11a7338b0e854725efe7acba3e1f3e523d22ccacee (image=quay.io/ceph/ceph:v18, name=upbeat_gould, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:07:36 compute-0 podman[97931]: 2025-11-25 20:07:36.97738777 +0000 UTC m=+0.167141739 container start 9fcd2c3e9219e626716d5b11a7338b0e854725efe7acba3e1f3e523d22ccacee (image=quay.io/ceph/ceph:v18, name=upbeat_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:07:36 compute-0 podman[97931]: 2025-11-25 20:07:36.982889426 +0000 UTC m=+0.172643405 container attach 9fcd2c3e9219e626716d5b11a7338b0e854725efe7acba3e1f3e523d22ccacee (image=quay.io/ceph/ceph:v18, name=upbeat_gould, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 20:07:37 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:07:37 compute-0 upbeat_gould[97947]: 
Nov 25 20:07:37 compute-0 upbeat_gould[97947]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 20:07:37 compute-0 systemd[1]: libpod-9fcd2c3e9219e626716d5b11a7338b0e854725efe7acba3e1f3e523d22ccacee.scope: Deactivated successfully.
Nov 25 20:07:37 compute-0 podman[97972]: 2025-11-25 20:07:37.68045475 +0000 UTC m=+0.045080948 container died 9fcd2c3e9219e626716d5b11a7338b0e854725efe7acba3e1f3e523d22ccacee (image=quay.io/ceph/ceph:v18, name=upbeat_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6bd22ec6e5675aca6081d69028b7802d06469f65eccff2f67f68e579ed383cb-merged.mount: Deactivated successfully.
Nov 25 20:07:37 compute-0 podman[97972]: 2025-11-25 20:07:37.743908585 +0000 UTC m=+0.108534703 container remove 9fcd2c3e9219e626716d5b11a7338b0e854725efe7acba3e1f3e523d22ccacee (image=quay.io/ceph/ceph:v18, name=upbeat_gould, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:37 compute-0 systemd[1]: libpod-conmon-9fcd2c3e9219e626716d5b11a7338b0e854725efe7acba3e1f3e523d22ccacee.scope: Deactivated successfully.
Nov 25 20:07:37 compute-0 ceph-mon[75144]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:37 compute-0 ansible-async_wrapper.py[97929]: Module complete (97929)
Nov 25 20:07:37 compute-0 sudo[98031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akjswzoetnwigxdiupghjyvdxeocfndl ; /usr/bin/python3'
Nov 25 20:07:37 compute-0 sudo[98031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:38 compute-0 python3[98033]: ansible-ansible.legacy.async_status Invoked with jid=j577345348833.97925 mode=status _async_dir=/root/.ansible_async
Nov 25 20:07:38 compute-0 sudo[98031]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:38 compute-0 sudo[98080]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvvmxxxfgywqrvshopmzgvkbzatpnhdw ; /usr/bin/python3'
Nov 25 20:07:38 compute-0 sudo[98080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:38 compute-0 python3[98082]: ansible-ansible.legacy.async_status Invoked with jid=j577345348833.97925 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 20:07:38 compute-0 sudo[98080]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:38 compute-0 ceph-mon[75144]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:07:38 compute-0 sudo[98106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ungmolyxknqnmfyndjjetsosmyactipn ; /usr/bin/python3'
Nov 25 20:07:38 compute-0 sudo[98106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:39 compute-0 python3[98108]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:39 compute-0 podman[98109]: 2025-11-25 20:07:39.194963279 +0000 UTC m=+0.063586820 container create 8ec714bef7782fa6b74a3485773fd963c787e3709940ca37ae115ac866258364 (image=quay.io/ceph/ceph:v18, name=priceless_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:07:39 compute-0 systemd[1]: Started libpod-conmon-8ec714bef7782fa6b74a3485773fd963c787e3709940ca37ae115ac866258364.scope.
Nov 25 20:07:39 compute-0 podman[98109]: 2025-11-25 20:07:39.16599017 +0000 UTC m=+0.034613801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c1c89823de07665d3d6491421aeddc62fc490d1a3ffdce893e6aef9cdd3e466/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c1c89823de07665d3d6491421aeddc62fc490d1a3ffdce893e6aef9cdd3e466/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:39 compute-0 podman[98109]: 2025-11-25 20:07:39.295705454 +0000 UTC m=+0.164329005 container init 8ec714bef7782fa6b74a3485773fd963c787e3709940ca37ae115ac866258364 (image=quay.io/ceph/ceph:v18, name=priceless_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:07:39 compute-0 podman[98109]: 2025-11-25 20:07:39.307777574 +0000 UTC m=+0.176401125 container start 8ec714bef7782fa6b74a3485773fd963c787e3709940ca37ae115ac866258364 (image=quay.io/ceph/ceph:v18, name=priceless_noyce, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:07:39 compute-0 podman[98109]: 2025-11-25 20:07:39.311414422 +0000 UTC m=+0.180037993 container attach 8ec714bef7782fa6b74a3485773fd963c787e3709940ca37ae115ac866258364 (image=quay.io/ceph/ceph:v18, name=priceless_noyce, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:07:39 compute-0 ceph-mon[75144]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:39 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:07:39 compute-0 priceless_noyce[98124]: 
Nov 25 20:07:39 compute-0 priceless_noyce[98124]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 20:07:39 compute-0 systemd[1]: libpod-8ec714bef7782fa6b74a3485773fd963c787e3709940ca37ae115ac866258364.scope: Deactivated successfully.
Nov 25 20:07:39 compute-0 podman[98109]: 2025-11-25 20:07:39.908430136 +0000 UTC m=+0.777053667 container died 8ec714bef7782fa6b74a3485773fd963c787e3709940ca37ae115ac866258364 (image=quay.io/ceph/ceph:v18, name=priceless_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 20:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c1c89823de07665d3d6491421aeddc62fc490d1a3ffdce893e6aef9cdd3e466-merged.mount: Deactivated successfully.
Nov 25 20:07:39 compute-0 podman[98109]: 2025-11-25 20:07:39.994532191 +0000 UTC m=+0.863155712 container remove 8ec714bef7782fa6b74a3485773fd963c787e3709940ca37ae115ac866258364 (image=quay.io/ceph/ceph:v18, name=priceless_noyce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:40 compute-0 systemd[1]: libpod-conmon-8ec714bef7782fa6b74a3485773fd963c787e3709940ca37ae115ac866258364.scope: Deactivated successfully.
Nov 25 20:07:40 compute-0 sudo[98106]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:40 compute-0 ceph-mon[75144]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:07:40 compute-0 sudo[98185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdgvdombebhmywwqpikkongqvxjkyhfh ; /usr/bin/python3'
Nov 25 20:07:40 compute-0 sudo[98185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:40 compute-0 python3[98187]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:41 compute-0 podman[98188]: 2025-11-25 20:07:41.051212373 +0000 UTC m=+0.054715615 container create 71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7 (image=quay.io/ceph/ceph:v18, name=modest_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 20:07:41 compute-0 systemd[1]: Started libpod-conmon-71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7.scope.
Nov 25 20:07:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:41 compute-0 podman[98188]: 2025-11-25 20:07:41.02699813 +0000 UTC m=+0.030501372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd03dbd1134cc0088c02839f66a064fc9780d2173791928d9e370fa06620167/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd03dbd1134cc0088c02839f66a064fc9780d2173791928d9e370fa06620167/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:41 compute-0 podman[98188]: 2025-11-25 20:07:41.139428385 +0000 UTC m=+0.142931587 container init 71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7 (image=quay.io/ceph/ceph:v18, name=modest_lamarr, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:41 compute-0 podman[98188]: 2025-11-25 20:07:41.1463685 +0000 UTC m=+0.149871732 container start 71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7 (image=quay.io/ceph/ceph:v18, name=modest_lamarr, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:07:41 compute-0 podman[98188]: 2025-11-25 20:07:41.150408107 +0000 UTC m=+0.153911319 container attach 71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7 (image=quay.io/ceph/ceph:v18, name=modest_lamarr, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:41 compute-0 ansible-async_wrapper.py[97928]: Done in kid B.
Nov 25 20:07:41 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:07:41 compute-0 modest_lamarr[98204]: 
Nov 25 20:07:41 compute-0 modest_lamarr[98204]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}]
Nov 25 20:07:41 compute-0 systemd[1]: libpod-71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7.scope: Deactivated successfully.
Nov 25 20:07:41 compute-0 conmon[98204]: conmon 71dd3b5577a860424fb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7.scope/container/memory.events
Nov 25 20:07:41 compute-0 podman[98188]: 2025-11-25 20:07:41.727183083 +0000 UTC m=+0.730686305 container died 71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7 (image=quay.io/ceph/ceph:v18, name=modest_lamarr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebd03dbd1134cc0088c02839f66a064fc9780d2173791928d9e370fa06620167-merged.mount: Deactivated successfully.
Nov 25 20:07:41 compute-0 podman[98188]: 2025-11-25 20:07:41.786614541 +0000 UTC m=+0.790117763 container remove 71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7 (image=quay.io/ceph/ceph:v18, name=modest_lamarr, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:07:41 compute-0 systemd[1]: libpod-conmon-71dd3b5577a860424fb50ad5fd0c8b9c31d677207aa312ba2b2b919e37d8f6a7.scope: Deactivated successfully.
Nov 25 20:07:41 compute-0 sudo[98185]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:41 compute-0 ceph-mon[75144]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:42 compute-0 sudo[98262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbqdotipheqseuhxmncrynicxquotkwf ; /usr/bin/python3'
Nov 25 20:07:42 compute-0 sudo[98262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:42 compute-0 python3[98264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:42 compute-0 ceph-mon[75144]: from='client.14252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:07:42 compute-0 podman[98265]: 2025-11-25 20:07:42.886354226 +0000 UTC m=+0.042171741 container create f300974fdd955cc6481013ede8bd48ee0435e05d78a6d8465d989acc124bc4ab (image=quay.io/ceph/ceph:v18, name=keen_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 20:07:42 compute-0 systemd[1]: Started libpod-conmon-f300974fdd955cc6481013ede8bd48ee0435e05d78a6d8465d989acc124bc4ab.scope.
Nov 25 20:07:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132589374c2ab31c46d2c21b3098fc06aac9ce0d70ce7d275dd545a23a088fb1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132589374c2ab31c46d2c21b3098fc06aac9ce0d70ce7d275dd545a23a088fb1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:42 compute-0 podman[98265]: 2025-11-25 20:07:42.871719418 +0000 UTC m=+0.027536913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:42 compute-0 podman[98265]: 2025-11-25 20:07:42.979419707 +0000 UTC m=+0.135237232 container init f300974fdd955cc6481013ede8bd48ee0435e05d78a6d8465d989acc124bc4ab (image=quay.io/ceph/ceph:v18, name=keen_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 20:07:42 compute-0 podman[98265]: 2025-11-25 20:07:42.988293503 +0000 UTC m=+0.144111028 container start f300974fdd955cc6481013ede8bd48ee0435e05d78a6d8465d989acc124bc4ab (image=quay.io/ceph/ceph:v18, name=keen_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:42 compute-0 podman[98265]: 2025-11-25 20:07:42.991748514 +0000 UTC m=+0.147566049 container attach f300974fdd955cc6481013ede8bd48ee0435e05d78a6d8465d989acc124bc4ab (image=quay.io/ceph/ceph:v18, name=keen_cerf, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:07:43 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:07:43 compute-0 keen_cerf[98281]: 
Nov 25 20:07:43 compute-0 keen_cerf[98281]: [{"container_id": "04730c9747e1", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.80%", "created": "2025-11-25T20:06:24.426862Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-25T20:06:24.489903Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T20:07:21.355415Z", "memory_usage": 11607736, "ports": [], "service_name": "crash", "started": "2025-11-25T20:06:24.266899Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-712dd110-763a-5547-8ef7-acda1414fdce@crash.compute-0", "version": "18.2.7"}, {"container_id": "b3ee4d5e0178", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "35.23%", "created": "2025-11-25T20:05:13.455956Z", "daemon_id": "compute-0.hdjasd", "daemon_name": "mgr.compute-0.hdjasd", "daemon_type": "mgr", "events": ["2025-11-25T20:06:30.476493Z daemon:mgr.compute-0.hdjasd [INFO] \"Reconfigured mgr.compute-0.hdjasd on host 'compute-0'\""], "hostname": "compute-0", "is_active": true, "last_refresh": "2025-11-25T20:07:21.355283Z", "memory_usage": 545783808, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-25T20:05:13.333390Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-712dd110-763a-5547-8ef7-acda1414fdce@mgr.compute-0.hdjasd", "version": "18.2.7"}, {"container_id": "3091c900b6c1", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.29%", "created": "2025-11-25T20:05:07.767804Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-25T20:06:29.479056Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T20:07:21.355121Z", "memory_request": 2147483648, "memory_usage": 38356910, "ports": [], "service_name": "mon", "started": "2025-11-25T20:05:10.889529Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-712dd110-763a-5547-8ef7-acda1414fdce@mon.compute-0", "version": "18.2.7"}, {"container_id": "64635db2efad", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.82%", "created": "2025-11-25T20:06:54.530726Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-25T20:06:54.602936Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T20:07:21.355560Z", "memory_request": 4294967296, "memory_usage": 56549703, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T20:06:54.397960Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-712dd110-763a-5547-8ef7-acda1414fdce@osd.0", "version": "18.2.7"}, {"container_id": "6d26f06c851a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "4.05%", "created": "2025-11-25T20:06:59.880882Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-25T20:06:59.958280Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T20:07:21.355682Z", "memory_request": 4294967296, "memory_usage": 55941529, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T20:06:59.688457Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-712dd110-763a-5547-8ef7-acda1414fdce@osd.1", "version": "18.2.7"}, {"container_id": "6261bc1abd12", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "5.39%", "created": "2025-11-25T20:07:05.066034Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-25T20:07:05.140084Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T20:07:21.355830Z", "memory_request": 4294967296, "memory_usage": 58971914, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T20:07:04.952994Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-712dd110-763a-5547-8ef7-acda1414fdce@osd.2", "version": "18.2.7"}]
Nov 25 20:07:43 compute-0 systemd[1]: libpod-f300974fdd955cc6481013ede8bd48ee0435e05d78a6d8465d989acc124bc4ab.scope: Deactivated successfully.
Nov 25 20:07:43 compute-0 podman[98265]: 2025-11-25 20:07:43.543083045 +0000 UTC m=+0.698900590 container died f300974fdd955cc6481013ede8bd48ee0435e05d78a6d8465d989acc124bc4ab (image=quay.io/ceph/ceph:v18, name=keen_cerf, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-132589374c2ab31c46d2c21b3098fc06aac9ce0d70ce7d275dd545a23a088fb1-merged.mount: Deactivated successfully.
Nov 25 20:07:43 compute-0 podman[98265]: 2025-11-25 20:07:43.620489611 +0000 UTC m=+0.776307106 container remove f300974fdd955cc6481013ede8bd48ee0435e05d78a6d8465d989acc124bc4ab (image=quay.io/ceph/ceph:v18, name=keen_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:07:43 compute-0 systemd[1]: libpod-conmon-f300974fdd955cc6481013ede8bd48ee0435e05d78a6d8465d989acc124bc4ab.scope: Deactivated successfully.
Nov 25 20:07:43 compute-0 sudo[98262]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:43 compute-0 ceph-mon[75144]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:44 compute-0 sudo[98342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdquhcbzwvpdjojuhsogiikcpqwnfohn ; /usr/bin/python3'
Nov 25 20:07:44 compute-0 sudo[98342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:44 compute-0 python3[98344]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:44 compute-0 podman[98345]: 2025-11-25 20:07:44.762864208 +0000 UTC m=+0.068348726 container create ff2e99252c03b36b4a98920bcb17c16709b51c3a9f501eadb661c0257059f71a (image=quay.io/ceph/ceph:v18, name=suspicious_moore, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:44 compute-0 systemd[1]: Started libpod-conmon-ff2e99252c03b36b4a98920bcb17c16709b51c3a9f501eadb661c0257059f71a.scope.
Nov 25 20:07:44 compute-0 podman[98345]: 2025-11-25 20:07:44.734021502 +0000 UTC m=+0.039506060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:44 compute-0 ceph-mon[75144]: from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 20:07:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff942ac67badb3152dd345d04fe86d094c6566c4e9ee6954e77e2f11e0c002f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff942ac67badb3152dd345d04fe86d094c6566c4e9ee6954e77e2f11e0c002f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:44 compute-0 podman[98345]: 2025-11-25 20:07:44.878592571 +0000 UTC m=+0.184077129 container init ff2e99252c03b36b4a98920bcb17c16709b51c3a9f501eadb661c0257059f71a (image=quay.io/ceph/ceph:v18, name=suspicious_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:07:44 compute-0 podman[98345]: 2025-11-25 20:07:44.888119704 +0000 UTC m=+0.193604202 container start ff2e99252c03b36b4a98920bcb17c16709b51c3a9f501eadb661c0257059f71a (image=quay.io/ceph/ceph:v18, name=suspicious_moore, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:07:44 compute-0 podman[98345]: 2025-11-25 20:07:44.891662468 +0000 UTC m=+0.197146986 container attach ff2e99252c03b36b4a98920bcb17c16709b51c3a9f501eadb661c0257059f71a (image=quay.io/ceph/ceph:v18, name=suspicious_moore, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:07:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 20:07:45 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621657441' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 20:07:45 compute-0 suspicious_moore[98361]: 
Nov 25 20:07:45 compute-0 suspicious_moore[98361]: {"fsid":"712dd110-763a-5547-8ef7-acda1414fdce","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":154,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":27,"num_osds":3,"num_up_osds":3,"osd_up_since":1764101231,"num_in_osds":3,"osd_in_since":1764101203,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83701760,"bytes_avail":64328224768,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T20:06:58.714937+0000","services":{}},"progress_events":{}}
Nov 25 20:07:45 compute-0 systemd[1]: libpod-ff2e99252c03b36b4a98920bcb17c16709b51c3a9f501eadb661c0257059f71a.scope: Deactivated successfully.
Nov 25 20:07:45 compute-0 podman[98345]: 2025-11-25 20:07:45.50707056 +0000 UTC m=+0.812555078 container died ff2e99252c03b36b4a98920bcb17c16709b51c3a9f501eadb661c0257059f71a (image=quay.io/ceph/ceph:v18, name=suspicious_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:07:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff942ac67badb3152dd345d04fe86d094c6566c4e9ee6954e77e2f11e0c002f1-merged.mount: Deactivated successfully.
Nov 25 20:07:45 compute-0 podman[98345]: 2025-11-25 20:07:45.567015383 +0000 UTC m=+0.872499901 container remove ff2e99252c03b36b4a98920bcb17c16709b51c3a9f501eadb661c0257059f71a (image=quay.io/ceph/ceph:v18, name=suspicious_moore, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:07:45 compute-0 systemd[1]: libpod-conmon-ff2e99252c03b36b4a98920bcb17c16709b51c3a9f501eadb661c0257059f71a.scope: Deactivated successfully.
Nov 25 20:07:45 compute-0 sudo[98342]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:45 compute-0 ceph-mon[75144]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:45 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/621657441' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 20:07:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:46 compute-0 sudo[98420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aavoyqlmpoyclhhvjzgbaetttqyrllpq ; /usr/bin/python3'
Nov 25 20:07:46 compute-0 sudo[98420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:46 compute-0 python3[98422]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:46 compute-0 podman[98423]: 2025-11-25 20:07:46.604192155 +0000 UTC m=+0.054931270 container create 9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f (image=quay.io/ceph/ceph:v18, name=great_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:46 compute-0 systemd[1]: Started libpod-conmon-9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f.scope.
Nov 25 20:07:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1ae92b6d1ee77d31e47fe196a3735d34287dd109823aad7edb9bcb9deb0d30/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1ae92b6d1ee77d31e47fe196a3735d34287dd109823aad7edb9bcb9deb0d30/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:46 compute-0 podman[98423]: 2025-11-25 20:07:46.580018083 +0000 UTC m=+0.030757258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:46 compute-0 podman[98423]: 2025-11-25 20:07:46.687772505 +0000 UTC m=+0.138511690 container init 9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f (image=quay.io/ceph/ceph:v18, name=great_bassi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 20:07:46 compute-0 podman[98423]: 2025-11-25 20:07:46.697535284 +0000 UTC m=+0.148274399 container start 9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f (image=quay.io/ceph/ceph:v18, name=great_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:07:46 compute-0 podman[98423]: 2025-11-25 20:07:46.70151976 +0000 UTC m=+0.152258905 container attach 9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f (image=quay.io/ceph/ceph:v18, name=great_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:07:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:47 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 20:07:47 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1781834546' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 20:07:47 compute-0 great_bassi[98438]: 
Nov 25 20:07:47 compute-0 systemd[1]: libpod-9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f.scope: Deactivated successfully.
Nov 25 20:07:47 compute-0 conmon[98438]: conmon 9d9ae6db3c0a01e9f4f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f.scope/container/memory.events
Nov 25 20:07:47 compute-0 great_bassi[98438]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""}]
Nov 25 20:07:47 compute-0 podman[98423]: 2025-11-25 20:07:47.2588614 +0000 UTC m=+0.709600545 container died 9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f (image=quay.io/ceph/ceph:v18, name=great_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e1ae92b6d1ee77d31e47fe196a3735d34287dd109823aad7edb9bcb9deb0d30-merged.mount: Deactivated successfully.
Nov 25 20:07:47 compute-0 podman[98423]: 2025-11-25 20:07:47.306903986 +0000 UTC m=+0.757643091 container remove 9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f (image=quay.io/ceph/ceph:v18, name=great_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:47 compute-0 systemd[1]: libpod-conmon-9d9ae6db3c0a01e9f4f74a896704e967a3ffdf048c78bb7a5d239300f553d45f.scope: Deactivated successfully.
Nov 25 20:07:47 compute-0 sudo[98420]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:47 compute-0 ceph-mon[75144]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:47 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1781834546' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
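The block above is one complete Ansible-driven probe: a throwaway quay.io/ceph/ceph:v18 container runs "ceph config dump -f json" against the cluster, emits the JSON seen on the great_bassi output line, and is removed. A minimal way to reproduce it by hand and pretty-print the result (the assimilate_ceph.conf volume from the playbook is dropped here since config dump does not read it; all other flags are copied verbatim from the log):

    import json
    import subprocess

    # Same containerized ceph client invocation the playbook used.
    out = subprocess.check_output([
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "712dd110-763a-5547-8ef7-acda1414fdce",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "config", "dump", "-f", "json",
    ])
    for opt in json.loads(out):
        print(f"{opt['section']:>6} {opt['name']} = {opt['value']}")

Of the dumped options, osd_pool_default_size=1 is worth remembering; it explains the instant PG activations at the end of this log.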
Nov 25 20:07:48 compute-0 sudo[98500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwvxdhvjtwoorgwttymcecvvwidzjxxj ; /usr/bin/python3'
Nov 25 20:07:48 compute-0 sudo[98500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:48 compute-0 python3[98502]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:48 compute-0 podman[98503]: 2025-11-25 20:07:48.396987193 +0000 UTC m=+0.065568322 container create 2805c4c0d349e2955948c744f66993951b1d6d920f16316d0294ac357de29d08 (image=quay.io/ceph/ceph:v18, name=mystifying_tharp, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:48 compute-0 systemd[1]: Started libpod-conmon-2805c4c0d349e2955948c744f66993951b1d6d920f16316d0294ac357de29d08.scope.
Nov 25 20:07:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddad163266ce9e83579bf26a092498364abb3e9589c9db41c75cc78e8a6aee79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddad163266ce9e83579bf26a092498364abb3e9589c9db41c75cc78e8a6aee79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:48 compute-0 podman[98503]: 2025-11-25 20:07:48.363904895 +0000 UTC m=+0.032486104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:48 compute-0 podman[98503]: 2025-11-25 20:07:48.464569198 +0000 UTC m=+0.133150407 container init 2805c4c0d349e2955948c744f66993951b1d6d920f16316d0294ac357de29d08 (image=quay.io/ceph/ceph:v18, name=mystifying_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:48 compute-0 podman[98503]: 2025-11-25 20:07:48.474179693 +0000 UTC m=+0.142760842 container start 2805c4c0d349e2955948c744f66993951b1d6d920f16316d0294ac357de29d08 (image=quay.io/ceph/ceph:v18, name=mystifying_tharp, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:07:48 compute-0 podman[98503]: 2025-11-25 20:07:48.478244121 +0000 UTC m=+0.146825290 container attach 2805c4c0d349e2955948c744f66993951b1d6d920f16316d0294ac357de29d08 (image=quay.io/ceph/ceph:v18, name=mystifying_tharp, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:07:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:48 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 25 20:07:48 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3056230463' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 25 20:07:48 compute-0 mystifying_tharp[98518]: mimic
Nov 25 20:07:49 compute-0 systemd[1]: libpod-2805c4c0d349e2955948c744f66993951b1d6d920f16316d0294ac357de29d08.scope: Deactivated successfully.
Nov 25 20:07:49 compute-0 podman[98503]: 2025-11-25 20:07:49.012960131 +0000 UTC m=+0.681541290 container died 2805c4c0d349e2955948c744f66993951b1d6d920f16316d0294ac357de29d08 (image=quay.io/ceph/ceph:v18, name=mystifying_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddad163266ce9e83579bf26a092498364abb3e9589c9db41c75cc78e8a6aee79-merged.mount: Deactivated successfully.
Nov 25 20:07:49 compute-0 podman[98503]: 2025-11-25 20:07:49.064824728 +0000 UTC m=+0.733405857 container remove 2805c4c0d349e2955948c744f66993951b1d6d920f16316d0294ac357de29d08 (image=quay.io/ceph/ceph:v18, name=mystifying_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:07:49 compute-0 systemd[1]: libpod-conmon-2805c4c0d349e2955948c744f66993951b1d6d920f16316d0294ac357de29d08.scope: Deactivated successfully.
Nov 25 20:07:49 compute-0 sudo[98500]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:49 compute-0 ceph-mon[75144]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:49 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3056230463' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
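The probe above returned "mimic" for require-min-compat-client. That matters for the balancer activity later in this log: the upmap mode needs clients that speak at least luminous. A one-line check against the usual release ordering (the list itself is the only assumption here):

    # Ceph release names in order; the upmap balancer requires >= luminous.
    RELEASES = ["jewel", "kraken", "luminous", "mimic", "nautilus",
                "octopus", "pacific", "quincy", "reef"]
    assert RELEASES.index("mimic") >= RELEASES.index("luminous")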
Nov 25 20:07:50 compute-0 sudo[98580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yogsqxcvjulgxlpqmqbilfzftisijbvv ; /usr/bin/python3'
Nov 25 20:07:50 compute-0 sudo[98580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:07:50 compute-0 python3[98582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 712dd110-763a-5547-8ef7-acda1414fdce -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:07:50 compute-0 podman[98583]: 2025-11-25 20:07:50.246929369 +0000 UTC m=+0.061739080 container create 9aa6a0eee50445d551318a777d6cfd16cb2e57f70453247ac89f0b5ce9e9536a (image=quay.io/ceph/ceph:v18, name=trusting_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:07:50 compute-0 systemd[1]: Started libpod-conmon-9aa6a0eee50445d551318a777d6cfd16cb2e57f70453247ac89f0b5ce9e9536a.scope.
Nov 25 20:07:50 compute-0 podman[98583]: 2025-11-25 20:07:50.216736227 +0000 UTC m=+0.031546038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 20:07:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3d29597ac876040da44a466ef30c9d826ae45dd49f3066650b5f31580c54de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3d29597ac876040da44a466ef30c9d826ae45dd49f3066650b5f31580c54de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:50 compute-0 podman[98583]: 2025-11-25 20:07:50.33056412 +0000 UTC m=+0.145373861 container init 9aa6a0eee50445d551318a777d6cfd16cb2e57f70453247ac89f0b5ce9e9536a (image=quay.io/ceph/ceph:v18, name=trusting_roentgen, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:07:50 compute-0 podman[98583]: 2025-11-25 20:07:50.338959863 +0000 UTC m=+0.153769594 container start 9aa6a0eee50445d551318a777d6cfd16cb2e57f70453247ac89f0b5ce9e9536a (image=quay.io/ceph/ceph:v18, name=trusting_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:07:50 compute-0 podman[98583]: 2025-11-25 20:07:50.342844016 +0000 UTC m=+0.157653777 container attach 9aa6a0eee50445d551318a777d6cfd16cb2e57f70453247ac89f0b5ce9e9536a (image=quay.io/ceph/ceph:v18, name=trusting_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:07:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 25 20:07:50 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3782126592' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 25 20:07:50 compute-0 trusting_roentgen[98598]: 
Nov 25 20:07:50 compute-0 trusting_roentgen[98598]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":5}}
Nov 25 20:07:50 compute-0 systemd[1]: libpod-9aa6a0eee50445d551318a777d6cfd16cb2e57f70453247ac89f0b5ce9e9536a.scope: Deactivated successfully.
Nov 25 20:07:50 compute-0 podman[98583]: 2025-11-25 20:07:50.996370551 +0000 UTC m=+0.811180282 container died 9aa6a0eee50445d551318a777d6cfd16cb2e57f70453247ac89f0b5ce9e9536a (image=quay.io/ceph/ceph:v18, name=trusting_roentgen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab3d29597ac876040da44a466ef30c9d826ae45dd49f3066650b5f31580c54de-merged.mount: Deactivated successfully.
Nov 25 20:07:51 compute-0 podman[98583]: 2025-11-25 20:07:51.050475588 +0000 UTC m=+0.865285309 container remove 9aa6a0eee50445d551318a777d6cfd16cb2e57f70453247ac89f0b5ce9e9536a (image=quay.io/ceph/ceph:v18, name=trusting_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:07:51 compute-0 systemd[1]: libpod-conmon-9aa6a0eee50445d551318a777d6cfd16cb2e57f70453247ac89f0b5ce9e9536a.scope: Deactivated successfully.
Nov 25 20:07:51 compute-0 sudo[98580]: pam_unix(sudo:session): session closed for user root
Nov 25 20:07:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:51 compute-0 ceph-mon[75144]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:51 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3782126592' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
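The versions probe confirms every daemon is on the same build, which is what the JSON on the trusting_roentgen line encodes. A compact restatement of that payload, with the check a deployment tool could apply (the version string is copied from the log):

    V = "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)"
    versions = {"mon": {V: 1}, "mgr": {V: 1}, "osd": {V: 3}, "overall": {V: 5}}
    # A mixed-version cluster would carry more than one key under "overall".
    assert len(versions["overall"]) == 1
    print(sum(versions["overall"].values()), "daemons, all on", V)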
Nov 25 20:07:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:53 compute-0 ceph-mon[75144]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:55 compute-0 ceph-mon[75144]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:07:56
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.meta']
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
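The pg_autoscaler lines above can be reproduced arithmetically. For the '.mgr' pool the target appears to be usage ratio x bias x PGs-per-OSD x OSD count; assuming the stock mon_target_pg_per_osd of 100 and the 3 OSDs the osdmap reports, the logged figure falls out exactly:

    # Values copied from the '.mgr' autoscaler line; 100 PGs/OSD is an assumed default.
    usage_ratio = 1.4371499967441557e-05
    bias = 1.0
    target_pg_per_osd = 100
    n_osd = 3
    print(usage_ratio * bias * target_pg_per_osd * n_osd)  # 0.004311449990232467

Pools reporting 0.0 usage are instead driven to the 32-PG figure shown on their lines, which is what triggers the run of "osd pool set ... pg_num 32" commands that follows.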
Nov 25 20:07:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 20:07:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:07:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 25 20:07:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:07:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:07:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 25 20:07:56 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 25 20:07:56 compute-0 ceph-mgr[75443]: [progress INFO root] update: starting ev e57b0be3-1d29-4ea4-aabd-5f1fdc5e97e1 (PG autoscaler increasing pool 1 PGs from 1 to 32)
Nov 25 20:07:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 20:07:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:07:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 25 20:07:57 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:07:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 25 20:07:57 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 25 20:07:57 compute-0 ceph-mgr[75443]: [progress INFO root] update: starting ev afb4576f-2b8d-4b3c-8814-51afcb242f6d (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 20:07:57 compute-0 ceph-mon[75144]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:07:57 compute-0 ceph-mon[75144]: osdmap e28: 3 total, 3 up, 3 in
Nov 25 20:07:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:07:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 20:07:57 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:07:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 20:07:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:07:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 20:07:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:07:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 25 20:07:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:07:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:07:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:07:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 25 20:07:58 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 25 20:07:58 compute-0 ceph-mgr[75443]: [progress INFO root] update: starting ev 454be7d5-62a8-4ebd-b932-d2bc2da6159b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 20:07:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 20:07:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:07:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:07:58 compute-0 ceph-mon[75144]: osdmap e29: 3 total, 3 up, 3 in
Nov 25 20:07:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:07:58 compute-0 ceph-mon[75144]: pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:07:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:07:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
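Note the two-step shape of each resize above: the mgr first raises the pg_num target for a pool, then separately walks pg_num_actual up to it. Each step reaches the mon as a JSON mon_command, exactly as printed in the handle_command lines. A sketch of issuing the same command programmatically through the python3-rados binding (cluster connectivity and the admin keyring path are assumed; mon_command takes the JSON string plus an input buffer and returns a status, output buffer, and status string):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    # Same payload the mgr sent, per the audit lines above.
    cmd = json.dumps({"prefix": "osd pool set", "pool": "vms",
                      "var": "pg_num", "val": "32"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs)
    cluster.shutdown()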
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 30 pg[1.0( empty local-lis/les=14/15 n=0 ec=11/11 lis/c=14/14 les/c/f=15/15/0 sis=30 pruub=11.568552971s) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active pruub 69.792106628s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 30 pg[3.0( empty local-lis/les=14/15 n=0 ec=13/13 lis/c=14/14 les/c/f=15/15/0 sis=30 pruub=11.564014435s) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active pruub 69.788902283s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 30 pg[1.0( empty local-lis/les=14/15 n=0 ec=11/11 lis/c=14/14 les/c/f=15/15/0 sis=30 pruub=11.568552971s) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown pruub 69.792106628s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 30 pg[3.0( empty local-lis/les=14/15 n=0 ec=13/13 lis/c=14/14 les/c/f=15/15/0 sis=30 pruub=11.564014435s) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown pruub 69.788902283s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 25 20:07:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:07:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 25 20:07:59 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 25 20:07:59 compute-0 ceph-mgr[75443]: [progress INFO root] update: starting ev c721b2c5-c720-470d-ab8e-88d7e71f377b (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 20:07:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 20:07:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1a( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.18( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1b( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1a( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.19( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.18( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.17( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:07:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.15( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-mon[75144]: osdmap e30: 3 total, 3 up, 3 in
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.16( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:07:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:07:59 compute-0 ceph-mon[75144]: osdmap e31: 3 total, 3 up, 3 in
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.14( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.15( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.17( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.14( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.13( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.16( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.11( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.13( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.11( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.12( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.10( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.10( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:07:59 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.12( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.f( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.d( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.d( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.e( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.f( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.c( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.c( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.b( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.e( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.9( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.2( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.2( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.4( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.6( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.3( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.3( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.4( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.6( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.7( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.5( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.7( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.9( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.8( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.a( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.b( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.a( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.5( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.8( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1b( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.19( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1e( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1c( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1f( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1c( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1d( empty local-lis/les=14/15 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1d( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1f( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1a( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1e( empty local-lis/les=14/15 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1b( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.18( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1a( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.19( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.18( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.17( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.16( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.15( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.14( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.17( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.15( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.13( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.14( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.16( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.12( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.11( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.10( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.11( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.13( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.12( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.10( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.f( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.e( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.f( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.c( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.d( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.c( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.e( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.9( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.0( empty local-lis/les=30/31 n=0 ec=11/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=30/31 n=0 ec=13/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.d( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.2( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.2( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.6( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.b( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.3( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.4( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.4( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.3( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.6( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.7( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.5( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.7( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.9( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.a( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.8( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.b( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.8( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.19( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.a( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1e( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1b( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.5( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1f( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1c( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1d( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1d( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[3.1c( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1f( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 31 pg[1.1e( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=14/14 les/c/f=15/15/0 sis=30) [1] r=0 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v84: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 25 20:08:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:08:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:08:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:08:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 25 20:08:00 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 25 20:08:00 compute-0 ceph-mgr[75443]: [progress INFO root] update: starting ev 89e6f44c-9485-4a4c-a0f8-e8acd42b45f1 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 25 20:08:00 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 32 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=32 pruub=11.183920860s) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active pruub 76.173652649s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 20:08:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:08:01 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 32 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=32 pruub=11.183920860s) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown pruub 76.173652649s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:08:01 compute-0 ceph-mon[75144]: pgmap v84: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:08:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:08:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:08:01 compute-0 ceph-mon[75144]: osdmap e32: 3 total, 3 up, 3 in
Nov 25 20:08:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:01 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 32 pg[5.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=32 pruub=14.990661621s) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active pruub 70.011985779s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:01 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 32 pg[5.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=32 pruub=14.990661621s) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown pruub 70.011985779s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:01 compute-0 ceph-mgr[75443]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Nov 25 20:08:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 25 20:08:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:08:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 25 20:08:02 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] update: starting ev 1bf3c623-0e20-46f4-9f53-29a838fd5dcf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] complete: finished ev e57b0be3-1d29-4ea4-aabd-5f1fdc5e97e1 (PG autoscaler increasing pool 1 PGs from 1 to 32)
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event e57b0be3-1d29-4ea4-aabd-5f1fdc5e97e1 (PG autoscaler increasing pool 1 PGs from 1 to 32) in 5 seconds
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] complete: finished ev afb4576f-2b8d-4b3c-8814-51afcb242f6d (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event afb4576f-2b8d-4b3c-8814-51afcb242f6d (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] complete: finished ev 454be7d5-62a8-4ebd-b932-d2bc2da6159b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event 454be7d5-62a8-4ebd-b932-d2bc2da6159b (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] complete: finished ev c721b2c5-c720-470d-ab8e-88d7e71f377b (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event c721b2c5-c720-470d-ab8e-88d7e71f377b (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] complete: finished ev 89e6f44c-9485-4a4c-a0f8-e8acd42b45f1 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event 89e6f44c-9485-4a4c-a0f8-e8acd42b45f1 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] complete: finished ev 1bf3c623-0e20-46f4-9f53-29a838fd5dcf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event 1bf3c623-0e20-46f4-9f53-29a838fd5dcf (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1e( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.9( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1d( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1f( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1c( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.6( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.8( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.18( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.5( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.b( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1b( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.3( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.2( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.a( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.7( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.c( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.d( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.e( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.f( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.10( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.11( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.12( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.13( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.14( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.15( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.17( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.16( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.19( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1e( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1d( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.b( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.6( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.7( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.19( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.4( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.3( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.c( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.10( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.f( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.11( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.12( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.16( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.17( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1f( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.9( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 20:08:02 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 20:08:02 compute-0 ceph-mon[75144]: osdmap e33: 3 total, 3 up, 3 in
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1e( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1c( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1d( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.6( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.b( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.8( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.5( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.18( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.3( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1b( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.2( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=32/33 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.7( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.1( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.c( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.f( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.d( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.11( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.10( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.14( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.12( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.15( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.13( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.17( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.19( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 33 pg[5.16( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.6( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.b( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.7( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.19( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.4( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.0( empty local-lis/les=32/33 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.10( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.3( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.11( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.f( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.12( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.17( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:02 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 33 pg[4.16( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [0] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
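The burst above is osd.0 walking every placement group in pool 4 through the last peering step: once the AllReplicasActivated event fires, the PG leaves activation and is ready to serve I/O. A minimal sketch (not part of the log) for tallying such bursts per pool, assuming the journal text arrives on stdin; the regex shape is inferred from the lines above, not a stable API:

import re
import sys
from collections import Counter

# Matches the stanza opener, e.g. "pg[4.1b(", capturing the pool id ("4").
PG_RE = re.compile(r"pg\[(\d+)\.[0-9a-f]+\(")

def tally(lines):
    counts = Counter()
    for line in lines:
        if "AllReplicasActivated Activating complete" in line:
            m = PG_RE.search(line)
            if m:
                counts[m.group(1)] += 1
    return counts

if __name__ == "__main__":
    # e.g. journalctl | python3 tally_activations.py  (unit names vary by deployment)
    for pool, n in sorted(tally(sys.stdin).items()):
        print(f"pool {pool}: {n} PGs activated")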
Nov 25 20:08:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v87: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:02 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:02 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 25 20:08:03 compute-0 ceph-mon[75144]: pgmap v87: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:03 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:03 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:03 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:08:03 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
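Each mgr-issued pool command appears twice on the audit channel: once as "dispatch" when the mon accepts it, and once as "finished" when the corresponding osdmap change commits (here epoch e34, just below). A minimal sketch that pairs the two phases; treating entity plus command body as a unique key is an assumption that holds for the lines shown here:

import re

AUDIT_RE = re.compile(
    r"entity='(?P<entity>[^']+)' cmd='?(?P<cmd>\[.*?\])'?: (?P<phase>dispatch|finished)"
)

def pair_audit(lines):
    """Return (completed, still_pending) command keys from audit lines."""
    pending, done = {}, []
    for line in lines:
        m = AUDIT_RE.search(line)
        if not m:
            continue
        key = (m.group("entity"), m.group("cmd"))
        if m.group("phase") == "dispatch":
            pending[key] = line          # remember when the mon accepted it
        elif key in pending:
            done.append(key)             # finished: the map change committed
            del pending[key]
    return done, list(pending)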
Nov 25 20:08:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 25 20:08:03 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 25 20:08:03 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 25 20:08:03 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 25 20:08:03 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 34 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=34 pruub=12.677519798s) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active pruub 80.288261414s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:03 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 34 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=14.707113266s) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active pruub 76.939926147s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:03 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 34 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=34 pruub=12.677519798s) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown pruub 80.288261414s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:03 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 34 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=14.707113266s) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown pruub 76.939926147s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:03 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 20:08:03 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.1 scrub ok
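Scrub lines come in starts/ok pairs per PG; here the pool 4 and pool 5 scrubs complete within the same second. A minimal sketch pairing them and reporting elapsed wall-clock time; the syslog timestamp carries no year, so one is supplied as an explicit assumption:

import re
from datetime import datetime

SCRUB_RE = re.compile(
    r"^(?P<ts>\w{3} +\d+ [\d:]{8}) .* "
    r"(?P<pg>\d+\.[0-9a-f]+) (?P<kind>deep-scrub|scrub) (?P<phase>starts|ok)"
)

def scrub_durations(lines, year=2025):
    """Map (pgid, kind) -> seconds between 'starts' and 'ok'.

    Note the mon's cluster-log echo of the same events also matches; for
    per-OSD timing, filter the input to ceph-osd lines first.
    """
    started, durations = {}, {}
    for line in lines:
        m = SCRUB_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m.group('ts')}",
                               "%Y %b %d %H:%M:%S")
        key = (m.group("pg"), m.group("kind"))
        if m.group("phase") == "starts":
            started[key] = ts
        elif key in started:
            durations[key] = (ts - started.pop(key)).total_seconds()
    return durations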
Nov 25 20:08:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 25 20:08:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:08:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 20:08:04 compute-0 ceph-mon[75144]: osdmap e34: 3 total, 3 up, 3 in
Nov 25 20:08:04 compute-0 ceph-mon[75144]: 4.1 scrub starts
Nov 25 20:08:04 compute-0 ceph-mon[75144]: 4.1 scrub ok
Nov 25 20:08:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 25 20:08:04 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1e( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1c( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.10( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.11( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1f( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.12( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.13( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.15( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.14( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.17( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.16( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.9( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.b( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1a( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.8( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.a( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.d( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.6( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.4( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.7( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.15( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.14( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.5( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.17( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.2( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.3( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.16( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.11( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.f( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.c( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.10( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.13( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.12( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1d( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1a( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1b( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.18( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.d( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.c( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.19( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.f( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.2( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.e( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.3( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1b( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.e( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.6( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.b( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.18( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.7( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.19( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.8( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.4( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.9( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.5( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.a( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1e( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1f( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1c( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1d( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1a( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.10( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.12( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.11( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.13( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.15( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.14( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.17( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.8( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.b( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.6( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.d( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.0( empty local-lis/les=34/35 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.4( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.7( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.5( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.2( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.3( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1d( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1b( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.15( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.17( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.14( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.16( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.11( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.10( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.12( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.13( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.0( empty local-lis/les=34/35 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.2( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.3( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.6( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.7( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.8( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.19( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.5( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.4( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.9( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.a( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.1d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.1a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.18( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.19( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.9( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.18( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 35 pg[6.b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:04 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 35 pg[7.16( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
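This second peering wave is the follow-on from the pg_num_actual=32 changes above: pools 6 and 7 (apparently the cephfs pools adjusted by the mgr, though the pool-id mapping is not shown in these lines) gained new PGs at epoch 34 — visible as ec=34/18 and ec=34/20, i.e. a PG created at e34 in a pool created at e18 or e20 — and each new PG transitions to Primary, peers, and activates within the same second. A minimal field-extraction sketch for reading these stanzas; field meanings follow the usual PG-log conventions, so treat this as a reading aid rather than a complete parser:

import re

def pg_fields(line):
    """Pull pgid, creation epochs (ec=pg/pool) and past interval (pi=[a,b))
    out of a ceph-osd pg[...] stanza; None where a field is absent."""
    pgid = re.search(r"pg\[(\d+\.[0-9a-f]+)\(", line)
    ec = re.search(r"\bec=(\d+)/(\d+)", line)
    pi = re.search(r"\bpi=\[(\d+),(\d+)\)", line)
    return {
        "pgid": pgid.group(1) if pgid else None,
        "ec": tuple(map(int, ec.groups())) if ec else None,
        "pi": tuple(map(int, pi.groups())) if pi else None,
    }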
Nov 25 20:08:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v90: 193 pgs: 1 peering, 93 unknown, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:05 compute-0 ceph-mon[75144]: 5.1 scrub starts
Nov 25 20:08:05 compute-0 ceph-mon[75144]: 5.1 scrub ok
Nov 25 20:08:05 compute-0 ceph-mon[75144]: osdmap e35: 3 total, 3 up, 3 in
Nov 25 20:08:05 compute-0 ceph-mon[75144]: pgmap v90: 193 pgs: 1 peering, 93 unknown, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:05 compute-0 sshd-session[98635]: Accepted publickey for zuul from 192.168.122.30 port 45224 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:08:05 compute-0 systemd-logind[789]: New session 34 of user zuul.
Nov 25 20:08:05 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 25 20:08:05 compute-0 sshd-session[98635]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:08:06 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.1 deep-scrub starts
Nov 25 20:08:06 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.1 deep-scrub ok
Nov 25 20:08:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v91: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:06 compute-0 ceph-mgr[75443]: [progress INFO root] Writing back 9 completed events
Nov 25 20:08:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 20:08:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:06 compute-0 python3.9[98788]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:08:07 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.2 scrub starts
Nov 25 20:08:07 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.2 scrub ok
Nov 25 20:08:07 compute-0 ceph-mon[75144]: 1.1 deep-scrub starts
Nov 25 20:08:07 compute-0 ceph-mon[75144]: 1.1 deep-scrub ok
Nov 25 20:08:07 compute-0 ceph-mon[75144]: pgmap v91: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:07 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 25 20:08:07 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 25 20:08:08 compute-0 sudo[99004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ortczzuzuyaqpusrcucdltmbngeoohdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101288.1151593-32-95408589012594/AnsiballZ_command.py'
Nov 25 20:08:08 compute-0 sudo[99004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:08:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
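With pgmap v92 the cluster has converged: all 193 PGs (the original 99 plus those added by the pg_num changes) are active+clean. A minimal sketch that turns one of these pgmap summaries into a dict, with the field layout inferred from the lines above:

import re

PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+);"
)

def parse_pgmap(line):
    m = PGMAP_RE.search(line)
    if not m:
        return None
    states = {}
    for part in m.group("states").split(","):
        count, state = part.strip().split(" ", 1)
        states[state] = int(count)
    return {"version": int(m.group("version")),
            "total": int(m.group("total")),
            "states": states}

print(parse_pgmap(
    "pgmap v92: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used"
))
# -> {'version': 92, 'total': 193, 'states': {'active+clean': 193}}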
Nov 25 20:08:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
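Having raised pg_num_actual earlier, the mgr now raises pgp_num_actual to 32 on all six pools so data placement catches up with the new PG count. The same commands could be driven by hand with the ceph CLI; a minimal sketch, assuming a working ceph binary and admin keyring on the host (operators would normally set pgp_num and let the mgr converge it; the _actual variant is the internal knob the mgr itself uses, as these audit lines show):

import subprocess

# Pool names taken verbatim from the audit records above.
POOLS = ["backups", "cephfs.cephfs.data", "cephfs.cephfs.meta",
         "images", "vms", "volumes"]

for pool in POOLS:
    subprocess.run(
        ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", "32"],
        check=True,  # raise if the mon rejects the command
    )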
Nov 25 20:08:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 25 20:08:08 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 25 20:08:08 compute-0 ceph-mon[75144]: 1.2 scrub starts
Nov 25 20:08:08 compute-0 ceph-mon[75144]: 1.2 scrub ok
Nov 25 20:08:08 compute-0 ceph-mon[75144]: 5.2 scrub starts
Nov 25 20:08:08 compute-0 ceph-mon[75144]: 5.2 scrub ok
Nov 25 20:08:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.236433029s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683654785s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.236365318s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683654785s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.235626221s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.682960510s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.235555649s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.682960510s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.18( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169889450s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617309570s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.18( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169833183s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617309570s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.18( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169844627s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617424011s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.18( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169795990s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617424011s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.11( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.235258102s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683013916s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.17( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169669151s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617485046s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.11( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.235210419s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683013916s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.17( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169580460s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617485046s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169539452s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617485046s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169494629s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617485046s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169124603s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617301941s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.17( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169463158s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617721558s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169035912s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617301941s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.14( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169446945s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617736816s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.17( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169414520s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617721558s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.14( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169400215s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617736816s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.234521866s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683067322s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169303894s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617866516s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.15( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.234484673s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683067322s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169257164s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617866516s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.234435081s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683067322s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.15( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.234442711s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683067322s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169019699s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617874146s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.12( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169054031s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.617950439s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168986320s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617874146s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.11( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169105530s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618034363s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.11( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168722153s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618034363s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.12( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168644905s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618064880s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.12( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168610573s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618064880s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.12( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168416023s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.617950439s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.234975815s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.684593201s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.f( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168529510s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618179321s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.f( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168459892s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618179321s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.10( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168228149s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618003845s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168622971s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618446350s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.234801292s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.684593201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168122292s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618370056s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168578148s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618446350s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.10( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168159485s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618003845s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.d( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168096542s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618576050s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.232871056s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683433533s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.d( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.168041229s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618576050s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.232825279s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683433533s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.167708397s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618469238s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.167661667s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618469238s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.167613029s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618469238s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.232578278s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683456421s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.167566299s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618469238s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.232534409s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683456421s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.232422829s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683532715s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.232382774s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683532715s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.2( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.167373657s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.618568420s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.233401299s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683380127s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.2( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.167289734s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618568420s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.166978836s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618370056s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231767654s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683380127s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.5( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231887817s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683570862s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.1( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171710014s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.623428345s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.5( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231834412s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683570862s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.1( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171631813s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.623428345s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171859741s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.623809814s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171813011s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.623809814s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231513023s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683578491s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.6( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171837807s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.623931885s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231488228s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683586121s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231464386s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683578491s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.6( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171792984s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.623931885s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231446266s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683586121s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 python3.9[99006]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.7( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171515465s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.623962402s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.7( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171471596s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.623962402s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231060982s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683609009s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231014252s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683609009s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171265602s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.623947144s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231118202s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683845520s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.172306061s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.623817444s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171035767s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.623817444s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.231077194s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683845520s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171144485s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.624076843s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.230647087s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683616638s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171092033s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.624076843s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.230594635s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683670044s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.5( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171841621s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.624153137s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.170835495s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.623947144s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.230554581s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683670044s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.5( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.171012878s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.624153137s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.230617523s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683616638s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.230355263s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683792114s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.1b( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.170702934s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.624153137s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.230306625s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683792114s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.1b( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.170657158s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.624153137s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1e( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.170581818s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.624145508s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1e( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.170550346s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.624145508s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.1c( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.170526505s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.624275208s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.1c( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.170483589s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.624275208s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.229828835s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683753967s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.229783058s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683753967s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.229703903s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 78.683822632s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.229662895s) [0] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.683822632s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1f( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.170285225s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.624267578s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1f( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169873238s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.624267578s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169813156s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.624290466s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169771194s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.624290466s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.1f( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169684410s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.624328613s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[1.1f( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.169640541s) [0] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.624328613s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.170786858s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 82.624107361s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=30/31 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.166658401s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.624107361s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[1.1b( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.1b( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[7.1f( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.f( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[7.4( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[1.2( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.c( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.1( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[7.18( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[7.9( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[7.6( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[1.1f( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.3( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[1.1c( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.15( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.6( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[3.11( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[7.3( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[7.f( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.a( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.9( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.17( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[7.13( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[1.17( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.15( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[1.11( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.12( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[1.10( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[1.12( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[7.1b( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[3.1f( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.167648315s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018867493s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.167623520s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018867493s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.11( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[1.14( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[3.16( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.212270737s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.063644409s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.c( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.212222099s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.063644409s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.212200165s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.063674927s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.14( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.212185860s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.063682556s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.14( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.212164879s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.063682556s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[3.8( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.212147713s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.063674927s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.167126656s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018760681s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.167110443s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018760681s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.e( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.167030334s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018737793s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.2( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.167000771s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018737793s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.167000771s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018768311s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.5( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.11( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211969376s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.063743591s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.8( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166983604s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018768311s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[3.7( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.11( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211929321s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.063743591s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166838646s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018714905s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166822433s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018714905s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.13( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211906433s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.063842773s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.13( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211879730s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.063842773s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[1.5( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166600227s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018707275s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.1( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166584015s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018707275s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166590691s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018722534s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166558266s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018722534s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211634636s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.063835144s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166398048s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018623352s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166378021s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018623352s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211598396s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.063835144s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211491585s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.063812256s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211465836s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.063812256s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211720467s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064147949s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211694717s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064147949s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211703300s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064262390s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166110992s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018684387s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.166069031s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018684387s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211659431s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064262390s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211444855s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064178467s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211427689s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064178467s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165834427s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018623352s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165721893s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018547058s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[3.5( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165803909s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018623352s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211330414s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064208984s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211313248s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064208984s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165670395s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018547058s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165492058s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018508911s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165477753s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018508911s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165368080s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018486023s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165322304s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018486023s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165233612s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018478394s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211939812s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.065200806s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165213585s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018478394s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.211908340s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.065200806s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165042877s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018394470s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.165016174s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018394470s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210865974s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064323425s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210841179s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064323425s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.164860725s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018409729s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[3.1d( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.164834023s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018409729s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.4( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210734367s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064392090s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.4( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210706711s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064392090s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.164667130s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018440247s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.164639473s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018440247s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155600548s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.009490967s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155573845s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.009490967s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210494041s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064460754s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.164402008s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018440247s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210430145s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064460754s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.164377213s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018440247s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.164043427s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 82.018180847s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.164016724s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.018180847s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210309982s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064506531s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210261345s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064476013s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210241318s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064476013s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[1.f( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210275650s) [2] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064506531s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210186958s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064521790s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.1d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.210170746s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064521790s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.6( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.209750175s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active pruub 84.064239502s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[6.6( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36 pruub=11.209716797s) [1] r=-1 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.064239502s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.1a( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[3.1e( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.17( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.14( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[1.1( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.a( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[3.e( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[1.d( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[1.18( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[7.1c( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[3.18( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.12( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.10( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.f( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.d( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.c( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.d( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.e( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.2( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[4.18( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.2( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.1( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.4( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.9( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.145948410s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.716865540s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.145908356s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.716865540s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.156530380s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.727661133s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.156502724s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.727661133s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[6.15( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.b( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.5( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.144892693s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.716888428s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.144851685s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.716888428s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155567169s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.727767944s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155776978s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728027344s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155522346s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.727767944s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155750275s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728027344s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155660629s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728080750s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.4( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155641556s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728080750s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155561447s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728065491s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155522346s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728065491s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.7( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155501366s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728103638s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155605316s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728218079s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155582428s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728218079s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155467033s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728103638s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155413628s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728179932s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[4.8( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155427933s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728202820s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155395508s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728179932s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155414581s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728225708s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155406952s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728202820s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.1e( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155381203s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728225708s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155372620s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728279114s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155365944s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728302002s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155350685s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728279114s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155344009s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728302002s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.1c( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155329704s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728332520s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155316353s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728332520s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155216217s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728340149s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155106544s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728324890s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155111313s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728347778s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155173302s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728340149s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155080795s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728324890s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.1d( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155091286s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728347778s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155072212s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728424072s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155032158s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 71.728401184s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155051231s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728424072s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.155014992s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.728401184s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[6.14( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[4.13( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[6.11( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[4.11( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[6.6( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[6.13( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[4.e( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[6.f( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.1d( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.9( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.18( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.1( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.1a( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.c( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.f( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.11( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.12( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.13( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.19( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 36 pg[5.16( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[4.1( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[4.1a( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[6.8( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[5.1e( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[4.1b( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[4.a( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[5.5( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[4.1c( empty local-lis/les=0/0 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 36 pg[6.1f( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[5.4( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[5.3( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[5.2( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[5.7( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[5.14( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 36 pg[5.15( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:08 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 25 20:08:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 25 20:08:09 compute-0 ceph-mon[75144]: pgmap v92: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 20:08:09 compute-0 ceph-mon[75144]: osdmap e36: 3 total, 3 up, 3 in
Nov 25 20:08:09 compute-0 ceph-mon[75144]: 5.6 scrub starts
Nov 25 20:08:09 compute-0 ceph-mon[75144]: 5.6 scrub ok
Nov 25 20:08:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 25 20:08:09 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.15( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[3.11( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.1f( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[1.12( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[5.14( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[1.10( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.12( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[5.15( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[1.11( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.15( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[1.17( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[7.13( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.17( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.9( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.a( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[7.f( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[7.3( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[5.3( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.6( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.11( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[6.14( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[6.15( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[4.11( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[6.13( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[4.18( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[4.13( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[1.14( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[6.11( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[3.16( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.c( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[4.e( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[3.8( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[6.f( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[4.1( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.e( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[4.1b( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.5( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.2( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[1.5( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[3.7( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.1( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.8( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[3.5( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[4.1a( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[6.8( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[3.1d( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[1.f( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[1.1( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.1a( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[3.e( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.a( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[3.1e( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[1.d( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[4.a( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[6.1f( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[4.1c( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [2] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[7.1c( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [2] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[1.18( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 37 pg[3.18( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [2] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[7.1b( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[1.1c( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[5.5( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.3( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[1.1f( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[5.4( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[7.6( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[7.9( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[7.18( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[5.7( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.1( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.c( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[1.2( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[7.4( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.f( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[7.1f( empty local-lis/les=36/37 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[3.1b( empty local-lis/les=36/37 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[1.1b( empty local-lis/les=36/37 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=36) [0] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[5.1e( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 37 pg[5.2( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.1d( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.1e( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.12( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.1d( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.13( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.12( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.16( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.10( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.14( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.11( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.9( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.17( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.f( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.c( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.7( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.1( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.6( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.4( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.1( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.2( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.b( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.2( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.d( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.e( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.f( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.1c( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.18( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.19( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.c( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[5.1a( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=36/37 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:09 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 37 pg[6.4( empty local-lis/les=36/37 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:10 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 25 20:08:10 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 25 20:08:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:10 compute-0 ceph-mon[75144]: osdmap e37: 3 total, 3 up, 3 in
Nov 25 20:08:10 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 25 20:08:10 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 25 20:08:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:11 compute-0 ceph-mgr[75443]: [progress INFO root] Completed event 33052122-d6ed-47d2-ab66-122b2d2c63a1 (Global Recovery Event) in 10 seconds
Nov 25 20:08:11 compute-0 ceph-mon[75144]: 4.3 scrub starts
Nov 25 20:08:11 compute-0 ceph-mon[75144]: 4.3 scrub ok
Nov 25 20:08:11 compute-0 ceph-mon[75144]: pgmap v95: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:11 compute-0 ceph-mon[75144]: 5.8 scrub starts
Nov 25 20:08:11 compute-0 ceph-mon[75144]: 5.8 scrub ok
Nov 25 20:08:11 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 25 20:08:11 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 25 20:08:12 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 25 20:08:12 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 25 20:08:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:12 compute-0 ceph-mon[75144]: 5.a scrub starts
Nov 25 20:08:12 compute-0 ceph-mon[75144]: 5.a scrub ok
Nov 25 20:08:12 compute-0 ceph-mon[75144]: 4.6 scrub starts
Nov 25 20:08:12 compute-0 ceph-mon[75144]: 4.6 scrub ok
Nov 25 20:08:12 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 25 20:08:12 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 25 20:08:13 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 25 20:08:13 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 25 20:08:13 compute-0 ceph-mon[75144]: pgmap v96: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:13 compute-0 ceph-mon[75144]: 5.b scrub starts
Nov 25 20:08:13 compute-0 ceph-mon[75144]: 5.b scrub ok
Nov 25 20:08:13 compute-0 ceph-mon[75144]: 4.b scrub starts
Nov 25 20:08:13 compute-0 ceph-mon[75144]: 4.b scrub ok
Nov 25 20:08:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:15 compute-0 sudo[99004]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:15 compute-0 ceph-mon[75144]: pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:15 compute-0 sshd-session[98638]: Connection closed by 192.168.122.30 port 45224
Nov 25 20:08:15 compute-0 sshd-session[98635]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:08:15 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 25 20:08:15 compute-0 systemd[1]: session-34.scope: Consumed 8.725s CPU time.
Nov 25 20:08:15 compute-0 systemd-logind[789]: Session 34 logged out. Waiting for processes to exit.
Nov 25 20:08:15 compute-0 systemd-logind[789]: Removed session 34.
Nov 25 20:08:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:16 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 25 20:08:16 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 25 20:08:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:16 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 20:08:16 compute-0 ceph-mgr[75443]: [progress INFO root] Writing back 10 completed events
Nov 25 20:08:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 20:08:16 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:16 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.3 scrub starts
Nov 25 20:08:16 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.3 scrub ok
Nov 25 20:08:17 compute-0 ceph-mon[75144]: 4.c scrub starts
Nov 25 20:08:17 compute-0 ceph-mon[75144]: 4.c scrub ok
Nov 25 20:08:17 compute-0 ceph-mon[75144]: pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:18 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 25 20:08:18 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 25 20:08:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:18 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 25 20:08:18 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 25 20:08:18 compute-0 ceph-mon[75144]: 1.3 scrub starts
Nov 25 20:08:18 compute-0 ceph-mon[75144]: 1.3 scrub ok
Nov 25 20:08:19 compute-0 ceph-mon[75144]: 4.15 scrub starts
Nov 25 20:08:19 compute-0 ceph-mon[75144]: 4.15 scrub ok
Nov 25 20:08:19 compute-0 ceph-mon[75144]: pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:19 compute-0 ceph-mon[75144]: 5.d scrub starts
Nov 25 20:08:19 compute-0 ceph-mon[75144]: 5.d scrub ok
Nov 25 20:08:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:21 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 25 20:08:21 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 25 20:08:21 compute-0 ceph-mon[75144]: pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:21 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.4 scrub starts
Nov 25 20:08:21 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.4 scrub ok
Nov 25 20:08:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:22 compute-0 ceph-mon[75144]: 5.e scrub starts
Nov 25 20:08:22 compute-0 ceph-mon[75144]: 5.e scrub ok
Nov 25 20:08:23 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 25 20:08:23 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 25 20:08:23 compute-0 ceph-mon[75144]: 1.4 scrub starts
Nov 25 20:08:23 compute-0 ceph-mon[75144]: 1.4 scrub ok
Nov 25 20:08:23 compute-0 ceph-mon[75144]: pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:23 compute-0 ceph-mon[75144]: 4.16 scrub starts
Nov 25 20:08:23 compute-0 ceph-mon[75144]: 4.16 scrub ok
Nov 25 20:08:23 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 25 20:08:23 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 25 20:08:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:24 compute-0 ceph-mon[75144]: 3.2 scrub starts
Nov 25 20:08:24 compute-0 ceph-mon[75144]: 3.2 scrub ok
Nov 25 20:08:25 compute-0 ceph-mon[75144]: pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:08:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:08:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:08:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:08:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:08:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:08:27 compute-0 ceph-mon[75144]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:28 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.6 scrub starts
Nov 25 20:08:28 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.6 scrub ok
Nov 25 20:08:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:28 compute-0 ceph-mon[75144]: 1.6 scrub starts
Nov 25 20:08:28 compute-0 ceph-mon[75144]: 1.6 scrub ok
Nov 25 20:08:29 compute-0 sudo[99064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:29 compute-0 sudo[99064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:29 compute-0 sudo[99064]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:29 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 25 20:08:29 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 25 20:08:29 compute-0 sudo[99089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:08:29 compute-0 sudo[99089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:29 compute-0 sudo[99089]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:29 compute-0 sudo[99114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:29 compute-0 sudo[99114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:29 compute-0 sudo[99114]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:29 compute-0 sudo[99139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:08:29 compute-0 sudo[99139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:29 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 25 20:08:29 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 25 20:08:29 compute-0 ceph-mon[75144]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:29 compute-0 ceph-mon[75144]: 4.17 scrub starts
Nov 25 20:08:29 compute-0 ceph-mon[75144]: 4.17 scrub ok
Nov 25 20:08:30 compute-0 podman[99236]: 2025-11-25 20:08:30.26482551 +0000 UTC m=+0.087269179 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:08:30 compute-0 podman[99236]: 2025-11-25 20:08:30.366561022 +0000 UTC m=+0.189004701 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 20:08:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:30 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 20:08:30 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 20:08:30 compute-0 ceph-mon[75144]: 5.10 scrub starts
Nov 25 20:08:30 compute-0 ceph-mon[75144]: 5.10 scrub ok
Nov 25 20:08:30 compute-0 sudo[99139]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:08:30 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:08:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:31 compute-0 sudo[99355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:31 compute-0 sudo[99355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:31 compute-0 sudo[99355]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:31 compute-0 sudo[99380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:08:31 compute-0 sudo[99380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:31 compute-0 sudo[99380]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:31 compute-0 sudo[99405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:31 compute-0 sudo[99405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:31 compute-0 sudo[99405]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:31 compute-0 sudo[99430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:08:31 compute-0 sudo[99430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:31 compute-0 sshd-session[99462]: Accepted publickey for zuul from 192.168.122.30 port 52656 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:08:31 compute-0 systemd-logind[789]: New session 35 of user zuul.
Nov 25 20:08:31 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 25 20:08:31 compute-0 sshd-session[99462]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:08:31 compute-0 sudo[99430]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:31 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 20:08:31 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 20:08:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:08:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:08:31 compute-0 ceph-mon[75144]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:31 compute-0 ceph-mon[75144]: 5.17 scrub starts
Nov 25 20:08:31 compute-0 ceph-mon[75144]: 5.17 scrub ok
Nov 25 20:08:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:08:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:08:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:08:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:31 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 2d786674-8eb0-4d80-9beb-c370234be372 does not exist
Nov 25 20:08:31 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev b87ad822-5e7e-40fb-9f68-c8532bacb51c does not exist
Nov 25 20:08:31 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 08b98672-7ab2-4def-957f-16eba0d441b6 does not exist
Nov 25 20:08:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:08:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:08:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:08:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:08:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:08:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:08:32 compute-0 sudo[99543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:32 compute-0 sudo[99543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:32 compute-0 sudo[99543]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:32 compute-0 sudo[99590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:08:32 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.7 deep-scrub starts
Nov 25 20:08:32 compute-0 sudo[99590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:32 compute-0 sudo[99590]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:32 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.7 deep-scrub ok
Nov 25 20:08:32 compute-0 sudo[99617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:32 compute-0 sudo[99617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:32 compute-0 sudo[99617]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:32 compute-0 sudo[99665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:08:32 compute-0 sudo[99665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:32 compute-0 python3.9[99740]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 20:08:32 compute-0 podman[99803]: 2025-11-25 20:08:32.685752208 +0000 UTC m=+0.052621237 container create aec7dcdb2c27609f570453651343649043bb9543a39931c22d914d526402f69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:08:32 compute-0 systemd[76777]: Starting Mark boot as successful...
Nov 25 20:08:32 compute-0 systemd[76777]: Finished Mark boot as successful.
Nov 25 20:08:32 compute-0 systemd[1]: Started libpod-conmon-aec7dcdb2c27609f570453651343649043bb9543a39931c22d914d526402f69d.scope.
Nov 25 20:08:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:08:32 compute-0 podman[99803]: 2025-11-25 20:08:32.654890329 +0000 UTC m=+0.021759458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:08:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:32 compute-0 podman[99803]: 2025-11-25 20:08:32.759562619 +0000 UTC m=+0.126431648 container init aec7dcdb2c27609f570453651343649043bb9543a39931c22d914d526402f69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:08:32 compute-0 podman[99803]: 2025-11-25 20:08:32.770575642 +0000 UTC m=+0.137444671 container start aec7dcdb2c27609f570453651343649043bb9543a39931c22d914d526402f69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:08:32 compute-0 podman[99803]: 2025-11-25 20:08:32.774449894 +0000 UTC m=+0.141318973 container attach aec7dcdb2c27609f570453651343649043bb9543a39931c22d914d526402f69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 20:08:32 compute-0 clever_hellman[99844]: 167 167
Nov 25 20:08:32 compute-0 systemd[1]: libpod-aec7dcdb2c27609f570453651343649043bb9543a39931c22d914d526402f69d.scope: Deactivated successfully.
Nov 25 20:08:32 compute-0 podman[99803]: 2025-11-25 20:08:32.777573737 +0000 UTC m=+0.144442766 container died aec7dcdb2c27609f570453651343649043bb9543a39931c22d914d526402f69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:08:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b094dc3b85d43298afd962f77ed4e91096e5afc486541aac77d579baba0290a-merged.mount: Deactivated successfully.
Nov 25 20:08:32 compute-0 podman[99803]: 2025-11-25 20:08:32.815609227 +0000 UTC m=+0.182478276 container remove aec7dcdb2c27609f570453651343649043bb9543a39931c22d914d526402f69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 20:08:32 compute-0 systemd[1]: libpod-conmon-aec7dcdb2c27609f570453651343649043bb9543a39931c22d914d526402f69d.scope: Deactivated successfully.
Nov 25 20:08:32 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 25 20:08:32 compute-0 ceph-mon[75144]: 5.1b scrub starts
Nov 25 20:08:32 compute-0 ceph-mon[75144]: 5.1b scrub ok
Nov 25 20:08:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:08:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:08:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:08:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:08:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:08:32 compute-0 ceph-mon[75144]: 1.7 deep-scrub starts
Nov 25 20:08:32 compute-0 ceph-mon[75144]: 1.7 deep-scrub ok
Nov 25 20:08:32 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 25 20:08:32 compute-0 podman[99896]: 2025-11-25 20:08:32.970366687 +0000 UTC m=+0.039320965 container create 24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:08:33 compute-0 systemd[1]: Started libpod-conmon-24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2.scope.
Nov 25 20:08:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb6d1ff48a445d50bcad8569651f3f7e2085aea0ac045165a44ec50e195819f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb6d1ff48a445d50bcad8569651f3f7e2085aea0ac045165a44ec50e195819f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb6d1ff48a445d50bcad8569651f3f7e2085aea0ac045165a44ec50e195819f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb6d1ff48a445d50bcad8569651f3f7e2085aea0ac045165a44ec50e195819f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb6d1ff48a445d50bcad8569651f3f7e2085aea0ac045165a44ec50e195819f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:33 compute-0 podman[99896]: 2025-11-25 20:08:33.050410473 +0000 UTC m=+0.119364791 container init 24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:08:33 compute-0 podman[99896]: 2025-11-25 20:08:32.955687167 +0000 UTC m=+0.024641465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:08:33 compute-0 podman[99896]: 2025-11-25 20:08:33.058193729 +0000 UTC m=+0.127148007 container start 24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:08:33 compute-0 podman[99896]: 2025-11-25 20:08:33.061360253 +0000 UTC m=+0.130314592 container attach 24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:08:33 compute-0 python3.9[100015]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:08:33 compute-0 ceph-mon[75144]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:33 compute-0 ceph-mon[75144]: 5.1c scrub starts
Nov 25 20:08:33 compute-0 ceph-mon[75144]: 5.1c scrub ok
Nov 25 20:08:34 compute-0 gallant_buck[99913]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:08:34 compute-0 gallant_buck[99913]: --> relative data size: 1.0
Nov 25 20:08:34 compute-0 gallant_buck[99913]: --> All data devices are unavailable
Nov 25 20:08:34 compute-0 systemd[1]: libpod-24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2.scope: Deactivated successfully.
Nov 25 20:08:34 compute-0 systemd[1]: libpod-24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2.scope: Consumed 1.142s CPU time.
Nov 25 20:08:34 compute-0 podman[99896]: 2025-11-25 20:08:34.261443572 +0000 UTC m=+1.330397890 container died 24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddb6d1ff48a445d50bcad8569651f3f7e2085aea0ac045165a44ec50e195819f-merged.mount: Deactivated successfully.
Nov 25 20:08:34 compute-0 podman[99896]: 2025-11-25 20:08:34.323458099 +0000 UTC m=+1.392412377 container remove 24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:08:34 compute-0 systemd[1]: libpod-conmon-24a418abd07a4bcfc4462dff7193abe8bbde41322010529ac0af1130bffbfcb2.scope: Deactivated successfully.
Nov 25 20:08:34 compute-0 sudo[99665]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:34 compute-0 sudo[100129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:34 compute-0 sudo[100129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:34 compute-0 sudo[100129]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:34 compute-0 sudo[100160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:08:34 compute-0 sudo[100160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:34 compute-0 sudo[100160]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:34 compute-0 sudo[100185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:34 compute-0 sudo[100185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:34 compute-0 sudo[100185]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:34 compute-0 sudo[100216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:08:34 compute-0 sudo[100216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:34 compute-0 sudo[100320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfeqopkombpjbcxexolqzksxkbcmvaga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101314.297921-45-112909994038683/AnsiballZ_command.py'
Nov 25 20:08:34 compute-0 sudo[100320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:08:35 compute-0 python3.9[100324]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:08:35 compute-0 podman[100350]: 2025-11-25 20:08:35.066255514 +0000 UTC m=+0.063733383 container create 4e389eb8efc9f5c04f5b3f3052b16426ad2340a6036d153212ee94cf5df8fc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:08:35 compute-0 sudo[100320]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:35 compute-0 systemd[1]: Started libpod-conmon-4e389eb8efc9f5c04f5b3f3052b16426ad2340a6036d153212ee94cf5df8fc31.scope.
Nov 25 20:08:35 compute-0 podman[100350]: 2025-11-25 20:08:35.033490734 +0000 UTC m=+0.030968703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:08:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:08:35 compute-0 podman[100350]: 2025-11-25 20:08:35.172214748 +0000 UTC m=+0.169692637 container init 4e389eb8efc9f5c04f5b3f3052b16426ad2340a6036d153212ee94cf5df8fc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:08:35 compute-0 podman[100350]: 2025-11-25 20:08:35.186058355 +0000 UTC m=+0.183536224 container start 4e389eb8efc9f5c04f5b3f3052b16426ad2340a6036d153212ee94cf5df8fc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:08:35 compute-0 podman[100350]: 2025-11-25 20:08:35.189221339 +0000 UTC m=+0.186699218 container attach 4e389eb8efc9f5c04f5b3f3052b16426ad2340a6036d153212ee94cf5df8fc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:08:35 compute-0 great_allen[100368]: 167 167
Nov 25 20:08:35 compute-0 systemd[1]: libpod-4e389eb8efc9f5c04f5b3f3052b16426ad2340a6036d153212ee94cf5df8fc31.scope: Deactivated successfully.
Nov 25 20:08:35 compute-0 podman[100350]: 2025-11-25 20:08:35.193249597 +0000 UTC m=+0.190727476 container died 4e389eb8efc9f5c04f5b3f3052b16426ad2340a6036d153212ee94cf5df8fc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_allen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5bbd632c7742ff9e85550d8afececbf5d9e1d52dbffec33c531903df1a9c4e3-merged.mount: Deactivated successfully.
Nov 25 20:08:35 compute-0 podman[100350]: 2025-11-25 20:08:35.247888747 +0000 UTC m=+0.245366616 container remove 4e389eb8efc9f5c04f5b3f3052b16426ad2340a6036d153212ee94cf5df8fc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:08:35 compute-0 systemd[1]: libpod-conmon-4e389eb8efc9f5c04f5b3f3052b16426ad2340a6036d153212ee94cf5df8fc31.scope: Deactivated successfully.
Nov 25 20:08:35 compute-0 podman[100417]: 2025-11-25 20:08:35.474371242 +0000 UTC m=+0.067942725 container create a6b61ee1a1370404649fd4d8cbb468e7248ed691420e1f6bfab7208ef467ac2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:08:35 compute-0 systemd[1]: Started libpod-conmon-a6b61ee1a1370404649fd4d8cbb468e7248ed691420e1f6bfab7208ef467ac2f.scope.
Nov 25 20:08:35 compute-0 podman[100417]: 2025-11-25 20:08:35.446558513 +0000 UTC m=+0.040130056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:08:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53729b03e363b94b6beb960436ae64537349cd3bbe85f0ce8ad36b78378bd5f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53729b03e363b94b6beb960436ae64537349cd3bbe85f0ce8ad36b78378bd5f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53729b03e363b94b6beb960436ae64537349cd3bbe85f0ce8ad36b78378bd5f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53729b03e363b94b6beb960436ae64537349cd3bbe85f0ce8ad36b78378bd5f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:35 compute-0 podman[100417]: 2025-11-25 20:08:35.582143993 +0000 UTC m=+0.175715536 container init a6b61ee1a1370404649fd4d8cbb468e7248ed691420e1f6bfab7208ef467ac2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:08:35 compute-0 podman[100417]: 2025-11-25 20:08:35.593194597 +0000 UTC m=+0.186766080 container start a6b61ee1a1370404649fd4d8cbb468e7248ed691420e1f6bfab7208ef467ac2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 20:08:35 compute-0 podman[100417]: 2025-11-25 20:08:35.5970638 +0000 UTC m=+0.190635283 container attach a6b61ee1a1370404649fd4d8cbb468e7248ed691420e1f6bfab7208ef467ac2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:08:35 compute-0 ceph-mon[75144]: pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:35 compute-0 sudo[100565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgudwlyumckjihgfzxmhfgecglxnizr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101315.481064-57-158912415448062/AnsiballZ_stat.py'
Nov 25 20:08:35 compute-0 sudo[100565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:08:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:36 compute-0 python3.9[100567]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:08:36 compute-0 sudo[100565]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:36 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 25 20:08:36 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]: {
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:     "0": [
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:         {
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "devices": [
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "/dev/loop3"
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             ],
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_name": "ceph_lv0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_size": "21470642176",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "name": "ceph_lv0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "tags": {
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.cluster_name": "ceph",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.crush_device_class": "",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.encrypted": "0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.osd_id": "0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.type": "block",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.vdo": "0"
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             },
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "type": "block",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "vg_name": "ceph_vg0"
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:         }
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:     ],
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:     "1": [
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:         {
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "devices": [
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "/dev/loop4"
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             ],
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_name": "ceph_lv1",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_size": "21470642176",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "name": "ceph_lv1",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "tags": {
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.cluster_name": "ceph",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.crush_device_class": "",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.encrypted": "0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.osd_id": "1",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.type": "block",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.vdo": "0"
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             },
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "type": "block",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "vg_name": "ceph_vg1"
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:         }
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:     ],
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:     "2": [
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:         {
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "devices": [
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "/dev/loop5"
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             ],
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_name": "ceph_lv2",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_size": "21470642176",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "name": "ceph_lv2",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "tags": {
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.cluster_name": "ceph",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.crush_device_class": "",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.encrypted": "0",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.osd_id": "2",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.type": "block",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:                 "ceph.vdo": "0"
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             },
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "type": "block",
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:             "vg_name": "ceph_vg2"
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:         }
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]:     ]
Nov 25 20:08:36 compute-0 dreamy_elbakyan[100466]: }
Nov 25 20:08:36 compute-0 systemd[1]: libpod-a6b61ee1a1370404649fd4d8cbb468e7248ed691420e1f6bfab7208ef467ac2f.scope: Deactivated successfully.
Nov 25 20:08:36 compute-0 podman[100417]: 2025-11-25 20:08:36.431073377 +0000 UTC m=+1.024644890 container died a6b61ee1a1370404649fd4d8cbb468e7248ed691420e1f6bfab7208ef467ac2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-53729b03e363b94b6beb960436ae64537349cd3bbe85f0ce8ad36b78378bd5f5-merged.mount: Deactivated successfully.
Nov 25 20:08:36 compute-0 podman[100417]: 2025-11-25 20:08:36.493988488 +0000 UTC m=+1.087559941 container remove a6b61ee1a1370404649fd4d8cbb468e7248ed691420e1f6bfab7208ef467ac2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:08:36 compute-0 systemd[1]: libpod-conmon-a6b61ee1a1370404649fd4d8cbb468e7248ed691420e1f6bfab7208ef467ac2f.scope: Deactivated successfully.
Nov 25 20:08:36 compute-0 sudo[100216]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:36 compute-0 sudo[100640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:36 compute-0 sudo[100640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:36 compute-0 sudo[100640]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:36 compute-0 sudo[100687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:08:36 compute-0 sudo[100687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:36 compute-0 sudo[100687]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:36 compute-0 sudo[100712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:36 compute-0 sudo[100712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:36 compute-0 sudo[100712]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:36 compute-0 sudo[100737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:08:36 compute-0 sudo[100737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:36 compute-0 ceph-mon[75144]: 4.19 scrub starts
Nov 25 20:08:36 compute-0 ceph-mon[75144]: 4.19 scrub ok
Nov 25 20:08:36 compute-0 ceph-mon[75144]: pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:37 compute-0 sudo[100842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brhjqlnieclndocdiwbpsqrmbkjasqyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101316.5046916-68-173900633563355/AnsiballZ_file.py'
Nov 25 20:08:37 compute-0 sudo[100842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:08:37 compute-0 python3.9[100850]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:08:37 compute-0 sudo[100842]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:37 compute-0 podman[100878]: 2025-11-25 20:08:37.285005894 +0000 UTC m=+0.076854032 container create 5782ad6843d63a36bab8ab40ea2c088b92e7af7fa71a60e48d1a49a9b9854c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:08:37 compute-0 systemd[1]: Started libpod-conmon-5782ad6843d63a36bab8ab40ea2c088b92e7af7fa71a60e48d1a49a9b9854c8d.scope.
Nov 25 20:08:37 compute-0 podman[100878]: 2025-11-25 20:08:37.254470723 +0000 UTC m=+0.046318871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:08:37 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 25 20:08:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:08:37 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 25 20:08:37 compute-0 podman[100878]: 2025-11-25 20:08:37.391562763 +0000 UTC m=+0.183410881 container init 5782ad6843d63a36bab8ab40ea2c088b92e7af7fa71a60e48d1a49a9b9854c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:08:37 compute-0 podman[100878]: 2025-11-25 20:08:37.403577683 +0000 UTC m=+0.195425791 container start 5782ad6843d63a36bab8ab40ea2c088b92e7af7fa71a60e48d1a49a9b9854c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:08:37 compute-0 podman[100878]: 2025-11-25 20:08:37.407336932 +0000 UTC m=+0.199185080 container attach 5782ad6843d63a36bab8ab40ea2c088b92e7af7fa71a60e48d1a49a9b9854c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:08:37 compute-0 determined_pascal[100917]: 167 167
Nov 25 20:08:37 compute-0 systemd[1]: libpod-5782ad6843d63a36bab8ab40ea2c088b92e7af7fa71a60e48d1a49a9b9854c8d.scope: Deactivated successfully.
Nov 25 20:08:37 compute-0 podman[100878]: 2025-11-25 20:08:37.410916577 +0000 UTC m=+0.202764725 container died 5782ad6843d63a36bab8ab40ea2c088b92e7af7fa71a60e48d1a49a9b9854c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:08:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c8511300fdb52b2b3e1f12931a1c9dfe4917792dd06be656df495e27ef3d721-merged.mount: Deactivated successfully.
Nov 25 20:08:37 compute-0 podman[100878]: 2025-11-25 20:08:37.464147351 +0000 UTC m=+0.255995459 container remove 5782ad6843d63a36bab8ab40ea2c088b92e7af7fa71a60e48d1a49a9b9854c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:08:37 compute-0 systemd[1]: libpod-conmon-5782ad6843d63a36bab8ab40ea2c088b92e7af7fa71a60e48d1a49a9b9854c8d.scope: Deactivated successfully.
Nov 25 20:08:37 compute-0 podman[100995]: 2025-11-25 20:08:37.69042615 +0000 UTC m=+0.054314323 container create 1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:08:37 compute-0 systemd[1]: Started libpod-conmon-1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4.scope.
Nov 25 20:08:37 compute-0 podman[100995]: 2025-11-25 20:08:37.671751804 +0000 UTC m=+0.035639977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:08:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f899b543529c6d14a12d30a4c46613e8fd971b14b760abb37caf462823bcdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f899b543529c6d14a12d30a4c46613e8fd971b14b760abb37caf462823bcdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f899b543529c6d14a12d30a4c46613e8fd971b14b760abb37caf462823bcdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f899b543529c6d14a12d30a4c46613e8fd971b14b760abb37caf462823bcdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:37 compute-0 podman[100995]: 2025-11-25 20:08:37.803204195 +0000 UTC m=+0.167092408 container init 1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:08:37 compute-0 podman[100995]: 2025-11-25 20:08:37.825840116 +0000 UTC m=+0.189728309 container start 1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:08:37 compute-0 podman[100995]: 2025-11-25 20:08:37.833380596 +0000 UTC m=+0.197268839 container attach 1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:08:37 compute-0 sudo[101090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efnwpydhrcphuobazpumzujhrrckkwit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101317.51684-77-7769060038925/AnsiballZ_file.py'
Nov 25 20:08:37 compute-0 sudo[101090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:08:38 compute-0 ceph-mon[75144]: 4.1d scrub starts
Nov 25 20:08:38 compute-0 ceph-mon[75144]: 4.1d scrub ok
Nov 25 20:08:38 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 25 20:08:38 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 25 20:08:38 compute-0 python3.9[101092]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:08:38 compute-0 sudo[101090]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]: {
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "osd_id": 2,
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "type": "bluestore"
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:     },
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "osd_id": 1,
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "type": "bluestore"
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:     },
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "osd_id": 0,
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:         "type": "bluestore"
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]:     }
Nov 25 20:08:38 compute-0 exciting_cartwright[101036]: }
Nov 25 20:08:38 compute-0 systemd[1]: libpod-1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4.scope: Deactivated successfully.
Nov 25 20:08:38 compute-0 systemd[1]: libpod-1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4.scope: Consumed 1.090s CPU time.
Nov 25 20:08:38 compute-0 podman[100995]: 2025-11-25 20:08:38.911964988 +0000 UTC m=+1.275853161 container died 1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:08:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3f899b543529c6d14a12d30a4c46613e8fd971b14b760abb37caf462823bcdd-merged.mount: Deactivated successfully.
Nov 25 20:08:38 compute-0 podman[100995]: 2025-11-25 20:08:38.983151868 +0000 UTC m=+1.347040031 container remove 1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 20:08:38 compute-0 systemd[1]: libpod-conmon-1c15c9e803fa9d4f1114185ee70fdee654a275f4aafe490ca1472f929268c0e4.scope: Deactivated successfully.
Nov 25 20:08:39 compute-0 ceph-mon[75144]: 5.1f scrub starts
Nov 25 20:08:39 compute-0 ceph-mon[75144]: 5.1f scrub ok
Nov 25 20:08:39 compute-0 ceph-mon[75144]: pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:39 compute-0 sudo[100737]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:08:39 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:08:39 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
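Note: the two `config-key set` commands above are cephadm (running inside the mgr) persisting its per-host device inventory into the monitor's config-key store under `mgr/cephadm/host.compute-0*`. The stored blob can be read back with the ceph CLI; a minimal sketch, assuming admin credentials on the node and the key name taken verbatim from the log:

```python
import subprocess

# Read back the inventory value cephadm stored in the mon config-key store above.
out = subprocess.run(
    ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
    capture_output=True, text=True, check=True,
).stdout
print(out)
```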
Nov 25 20:08:39 compute-0 python3.9[101262]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:08:39 compute-0 network[101324]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:08:39 compute-0 sudo[101284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:08:39 compute-0 network[101325]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:08:39 compute-0 sudo[101284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:39 compute-0 network[101326]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:08:39 compute-0 sudo[101284]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:39 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 25 20:08:39 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 25 20:08:39 compute-0 sudo[101331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:08:39 compute-0 sudo[101331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:08:39 compute-0 sudo[101331]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:39 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 25 20:08:39 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 25 20:08:40 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:40 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:08:40 compute-0 ceph-mon[75144]: 3.4 scrub starts
Nov 25 20:08:40 compute-0 ceph-mon[75144]: 3.4 scrub ok
Nov 25 20:08:40 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.8 scrub starts
Nov 25 20:08:40 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.8 scrub ok
Nov 25 20:08:40 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 25 20:08:40 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 25 20:08:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:41 compute-0 ceph-mon[75144]: 3.11 scrub starts
Nov 25 20:08:41 compute-0 ceph-mon[75144]: 3.11 scrub ok
Nov 25 20:08:41 compute-0 ceph-mon[75144]: 1.8 scrub starts
Nov 25 20:08:41 compute-0 ceph-mon[75144]: 1.8 scrub ok
Nov 25 20:08:41 compute-0 ceph-mon[75144]: 4.1e scrub starts
Nov 25 20:08:41 compute-0 ceph-mon[75144]: 4.1e scrub ok
Nov 25 20:08:41 compute-0 ceph-mon[75144]: pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
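Note: `_set_new_cache_sizes` is the monitor's periodic cache auto-tuning; the three allocations it reports (incremental osdmaps, full osdmaps, and the RocksDB kv cache) are a split of the overall `cache_size` budget. A quick check of the numbers in the line above:

```python
# Numbers copied from the _set_new_cache_sizes line above (bytes).
cache_size = 1020054731
inc_alloc = full_alloc = 348127232
kv_alloc = 322961408

total = inc_alloc + full_alloc + kv_alloc
print(total, cache_size - total)        # 1019215872, ~0.8 MiB of rounding slack
print(round(cache_size / 2**30, 2))     # ~0.95 GiB overall budget
```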
Nov 25 20:08:42 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 25 20:08:42 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 25 20:08:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:43 compute-0 python3.9[101612]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
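Note: the `lineinfile` task above targets `/proc/cmdline`, which is read-only, so this pattern only makes sense as a probe (presumably in check mode) for whether the kernel was booted with `cloud-init=disabled`. The equivalent test, as a sketch:

```python
# Check whether the running kernel was booted with cloud-init=disabled,
# mirroring what the lineinfile task above is probing for.
with open("/proc/cmdline") as f:
    tokens = f.read().split()
print("cloud-init=disabled" in tokens)
```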
Nov 25 20:08:43 compute-0 ceph-mon[75144]: 4.1f scrub starts
Nov 25 20:08:43 compute-0 ceph-mon[75144]: 4.1f scrub ok
Nov 25 20:08:43 compute-0 ceph-mon[75144]: pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:44 compute-0 python3.9[101762]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:08:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:45 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 25 20:08:45 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 25 20:08:45 compute-0 python3.9[101916]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:08:45 compute-0 ceph-mon[75144]: pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:46 compute-0 sudo[102072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbxfgnuukjmfvjzqnpwzeortgjvuphvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101326.1919436-125-259465471511309/AnsiballZ_setup.py'
Nov 25 20:08:46 compute-0 sudo[102072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:08:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:46 compute-0 ceph-mon[75144]: 6.3 scrub starts
Nov 25 20:08:46 compute-0 ceph-mon[75144]: 6.3 scrub ok
Nov 25 20:08:46 compute-0 python3.9[102074]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:08:47 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 20:08:47 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 20:08:47 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.9 scrub starts
Nov 25 20:08:47 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.9 scrub ok
Nov 25 20:08:47 compute-0 sudo[102072]: pam_unix(sudo:session): session closed for user root
Nov 25 20:08:47 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 25 20:08:47 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 25 20:08:47 compute-0 sudo[102156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftrsxfmthtixandvgzjmvcpkwzqrlblb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101326.1919436-125-259465471511309/AnsiballZ_dnf.py'
Nov 25 20:08:47 compute-0 sudo[102156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:08:47 compute-0 ceph-mon[75144]: pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:47 compute-0 ceph-mon[75144]: 7.11 scrub starts
Nov 25 20:08:47 compute-0 ceph-mon[75144]: 7.11 scrub ok
Nov 25 20:08:47 compute-0 python3.9[102158]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
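Note: the ansible dnf invocation above installs the node's base package set with `state=present`, which is roughly a single idempotent `dnf install` run. A sketch of the same call from Python, assuming `dnf` is on PATH and the process runs as root:

```python
import subprocess

# Package list copied from the ansible-ansible.legacy.dnf line above.
packages = [
    "driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager",
    "openstack-selinux", "python3-libselinux", "python3-pyyaml", "rsync",
    "tmpwatch", "sysstat", "iproute-tc", "ksmtuned", "systemd-container",
    "crypto-policies-scripts", "grubby", "sos",
]
# state=present == install only if missing; dnf exits 0 when already installed.
subprocess.run(["dnf", "install", "-y", *packages], check=True)
```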
Nov 25 20:08:48 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.a scrub starts
Nov 25 20:08:48 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.a scrub ok
Nov 25 20:08:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v114: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:48 compute-0 ceph-mon[75144]: 1.9 scrub starts
Nov 25 20:08:48 compute-0 ceph-mon[75144]: 1.9 scrub ok
Nov 25 20:08:48 compute-0 ceph-mon[75144]: 6.5 scrub starts
Nov 25 20:08:48 compute-0 ceph-mon[75144]: 6.5 scrub ok
Nov 25 20:08:49 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.14 scrub starts
Nov 25 20:08:49 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.14 scrub ok
Nov 25 20:08:49 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 25 20:08:49 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 25 20:08:49 compute-0 ceph-mon[75144]: 1.a scrub starts
Nov 25 20:08:49 compute-0 ceph-mon[75144]: 1.a scrub ok
Nov 25 20:08:49 compute-0 ceph-mon[75144]: pgmap v114: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:49 compute-0 ceph-mon[75144]: 1.14 scrub starts
Nov 25 20:08:49 compute-0 ceph-mon[75144]: 1.14 scrub ok
Nov 25 20:08:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v115: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:50 compute-0 ceph-mon[75144]: 6.7 scrub starts
Nov 25 20:08:50 compute-0 ceph-mon[75144]: 6.7 scrub ok
Nov 25 20:08:51 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 25 20:08:51 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 25 20:08:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:51 compute-0 ceph-mon[75144]: pgmap v115: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:51 compute-0 ceph-mon[75144]: 7.15 scrub starts
Nov 25 20:08:51 compute-0 ceph-mon[75144]: 7.15 scrub ok
Nov 25 20:08:52 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 25 20:08:52 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 25 20:08:52 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.b scrub starts
Nov 25 20:08:52 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.b scrub ok
Nov 25 20:08:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v116: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:52 compute-0 ceph-mon[75144]: 3.16 scrub starts
Nov 25 20:08:52 compute-0 ceph-mon[75144]: 3.16 scrub ok
Nov 25 20:08:52 compute-0 ceph-mon[75144]: 1.b scrub starts
Nov 25 20:08:52 compute-0 ceph-mon[75144]: 1.b scrub ok
Nov 25 20:08:53 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.c scrub starts
Nov 25 20:08:53 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.c scrub ok
Nov 25 20:08:53 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 25 20:08:53 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 25 20:08:53 compute-0 ceph-mon[75144]: pgmap v116: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:53 compute-0 ceph-mon[75144]: 1.c scrub starts
Nov 25 20:08:53 compute-0 ceph-mon[75144]: 1.c scrub ok
Nov 25 20:08:53 compute-0 ceph-mon[75144]: 6.9 scrub starts
Nov 25 20:08:53 compute-0 ceph-mon[75144]: 6.9 scrub ok
Nov 25 20:08:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v117: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:55 compute-0 ceph-mon[75144]: pgmap v117: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:08:56 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 25 20:08:56 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:08:56
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'volumes', '.mgr', 'backups', 'vms', 'cephfs.cephfs.meta']
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 3/10 changes
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Executing plan auto_2025-11-25_20:08:56
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [balancer INFO root] ceph osd pg-upmap-items 1.4 mappings [{'from': 1, 'to': 2}]
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [balancer INFO root] ceph osd pg-upmap-items 1.9 mappings [{'from': 1, 'to': 0}]
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [balancer INFO root] ceph osd pg-upmap-items 1.e mappings [{'from': 1, 'to': 2}]
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.4", "id": [1, 2]} v 0) v1
Nov 25 20:08:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.4", "id": [1, 2]}]: dispatch
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.9", "id": [1, 0]} v 0) v1
Nov 25 20:08:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.9", "id": [1, 0]}]: dispatch
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.e", "id": [1, 2]} v 0) v1
Nov 25 20:08:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.e", "id": [1, 2]}]: dispatch
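Note: the sequence above is the upmap balancer in action: the mgr prepared 3 of a possible 10 changes and submitted each as an `osd pg-upmap-items` mon command, moving one PG replica off osd.1 at a time (1.4 and 1.e to osd.2, 1.9 to osd.0). The osdmap epoch bump to e38, the `start_peering_interval`/`Stray`/`Primary` transitions on the OSDs, and the brief "2 peering" pgmap further down are the direct fallout of these remaps. The same remap can be issued by hand; a sketch mirroring the first handle_command payload:

```python
import subprocess

# Replay one of the three remaps the balancer issued above: remap PG 1.4
# by substituting osd.2 for osd.1 (arguments are pairs of from/to osd ids).
subprocess.run(["ceph", "osd", "pg-upmap-items", "1.4", "1", "2"], check=True)
```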
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v118: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:08:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 25 20:08:56 compute-0 ceph-mon[75144]: 6.a scrub starts
Nov 25 20:08:56 compute-0 ceph-mon[75144]: 6.a scrub ok
Nov 25 20:08:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.4", "id": [1, 2]}]: dispatch
Nov 25 20:08:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.9", "id": [1, 0]}]: dispatch
Nov 25 20:08:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.e", "id": [1, 2]}]: dispatch
Nov 25 20:08:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.4", "id": [1, 2]}]': finished
Nov 25 20:08:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.9", "id": [1, 0]}]': finished
Nov 25 20:08:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.e", "id": [1, 2]}]': finished
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e38 crush map has features 3314933000854323200, adjusting msgr requires
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 25 20:08:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 25 20:08:56 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 25 20:08:56 compute-0 ceph-osd[90092]: osd.1 38 crush map has features 432629239337189376, adjusting msgr requires for clients
Nov 25 20:08:56 compute-0 ceph-osd[90092]: osd.1 38 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Nov 25 20:08:56 compute-0 ceph-osd[90092]: osd.1 38 crush map has features 3314933000854323200, adjusting msgr requires for osds
Nov 25 20:08:56 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 38 pg[1.9( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=15.081516266s) [0] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 130.625122070s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:56 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 38 pg[1.9( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=15.081411362s) [0] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.625122070s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:56 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 38 pg[1.4( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=15.081186295s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 130.624938965s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:56 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 38 pg[1.4( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=15.081116676s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.624938965s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:56 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 38 pg[1.e( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=15.074785233s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 130.619293213s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 20:08:56 compute-0 ceph-osd[90092]: osd.1 pg_epoch: 38 pg[1.e( empty local-lis/les=30/31 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=15.074758530s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.619293213s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 20:08:56 compute-0 ceph-osd[89084]: osd.0 38 crush map has features 432629239337189376, adjusting msgr requires for clients
Nov 25 20:08:56 compute-0 ceph-osd[89084]: osd.0 38 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Nov 25 20:08:56 compute-0 ceph-osd[89084]: osd.0 38 crush map has features 3314933000854323200, adjusting msgr requires for osds
Nov 25 20:08:56 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 38 pg[1.9( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38) [0] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:56 compute-0 ceph-osd[91367]: osd.2 38 crush map has features 432629239337189376, adjusting msgr requires for clients
Nov 25 20:08:56 compute-0 ceph-osd[91367]: osd.2 38 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Nov 25 20:08:56 compute-0 ceph-osd[91367]: osd.2 38 crush map has features 3314933000854323200, adjusting msgr requires for osds
Nov 25 20:08:56 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 38 pg[1.e( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38) [2] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:56 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 38 pg[1.4( empty local-lis/les=0/0 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38) [2] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 20:08:57 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 25 20:08:57 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 25 20:08:57 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Nov 25 20:08:57 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Nov 25 20:08:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 25 20:08:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 25 20:08:57 compute-0 ceph-mon[75144]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 25 20:08:57 compute-0 ceph-mon[75144]: pgmap v118: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.4", "id": [1, 2]}]': finished
Nov 25 20:08:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.9", "id": [1, 0]}]': finished
Nov 25 20:08:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.e", "id": [1, 2]}]': finished
Nov 25 20:08:57 compute-0 ceph-mon[75144]: osdmap e38: 3 total, 3 up, 3 in
Nov 25 20:08:57 compute-0 ceph-mon[75144]: 7.c scrub starts
Nov 25 20:08:57 compute-0 ceph-mon[75144]: 7.c scrub ok
Nov 25 20:08:57 compute-0 ceph-mon[75144]: 6.10 scrub starts
Nov 25 20:08:57 compute-0 ceph-mon[75144]: 6.10 scrub ok
Nov 25 20:08:57 compute-0 ceph-osd[89084]: osd.0 pg_epoch: 39 pg[1.9( empty local-lis/les=38/39 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38) [0] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:57 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 39 pg[1.4( empty local-lis/les=38/39 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38) [2] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:57 compute-0 ceph-osd[91367]: osd.2 pg_epoch: 39 pg[1.e( empty local-lis/les=38/39 n=0 ec=30/11 lis/c=30/30 les/c/f=31/31/0 sis=38) [2] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 20:08:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v121: 193 pgs: 2 peering, 191 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:58 compute-0 ceph-mon[75144]: osdmap e39: 3 total, 3 up, 3 in
Nov 25 20:08:58 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 25 20:08:59 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 25 20:08:59 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Nov 25 20:08:59 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Nov 25 20:08:59 compute-0 ceph-mon[75144]: pgmap v121: 193 pgs: 2 peering, 191 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:08:59 compute-0 ceph-mon[75144]: 3.b scrub starts
Nov 25 20:08:59 compute-0 ceph-mon[75144]: 3.b scrub ok
Nov 25 20:08:59 compute-0 ceph-mon[75144]: 3.8 deep-scrub starts
Nov 25 20:08:59 compute-0 ceph-mon[75144]: 3.8 deep-scrub ok
Nov 25 20:09:00 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 20:09:00 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 20:09:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v122: 193 pgs: 2 peering, 191 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:01 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 20:09:01 compute-0 ceph-mon[75144]: 7.e scrub starts
Nov 25 20:09:01 compute-0 ceph-mon[75144]: 7.e scrub ok
Nov 25 20:09:01 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 20:09:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
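Note: the pg_autoscaler lines above expose its sizing rule: a pool's PG target is its share of raw capacity times the cluster-wide PG budget (OSD count x mon_target_pg_per_osd, here 3 x 100 = 300), scaled by the pool's bias and then quantized to a power of two. The '.mgr' line checks out: 1.4371e-05 x 300 ≈ 0.00431, which quantizes up to the 1-PG floor, while the zero-usage pools keep their current 32. A sketch of that arithmetic — the exact rounding is an assumption, and the real module additionally honors min/max pg_num and only proposes a change when the target is off by roughly 3x:

```python
import math

def pg_target(capacity_ratio, bias=1.0, osd_count=3, target_pg_per_osd=100):
    """Reproduce the autoscaler's raw target and its power-of-two quantization."""
    raw = capacity_ratio * bias * osd_count * target_pg_per_osd
    if raw <= 0:
        return raw, None  # zero usage: no basis to change the current pg_num
    return raw, max(1, 2 ** math.ceil(math.log2(raw)))

print(pg_target(1.4371499967441557e-05))  # (~0.00431, 1) -- matches the '.mgr' line
```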
Nov 25 20:09:02 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.13 scrub starts
Nov 25 20:09:02 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.13 scrub ok
Nov 25 20:09:02 compute-0 ceph-mon[75144]: pgmap v122: 193 pgs: 2 peering, 191 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:02 compute-0 ceph-mon[75144]: 7.2 scrub starts
Nov 25 20:09:02 compute-0 ceph-mon[75144]: 7.2 scrub ok
Nov 25 20:09:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v123: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:03 compute-0 ceph-mon[75144]: 1.13 scrub starts
Nov 25 20:09:03 compute-0 ceph-mon[75144]: 1.13 scrub ok
Nov 25 20:09:03 compute-0 ceph-mon[75144]: pgmap v123: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:03 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Nov 25 20:09:03 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Nov 25 20:09:04 compute-0 ceph-mon[75144]: 6.12 scrub starts
Nov 25 20:09:04 compute-0 ceph-mon[75144]: 6.12 scrub ok
Nov 25 20:09:04 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 25 20:09:04 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 25 20:09:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v124: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:05 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 25 20:09:05 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 25 20:09:05 compute-0 ceph-mon[75144]: 7.5 scrub starts
Nov 25 20:09:05 compute-0 ceph-mon[75144]: 7.5 scrub ok
Nov 25 20:09:05 compute-0 ceph-mon[75144]: pgmap v124: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:05 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 25 20:09:05 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 25 20:09:06 compute-0 ceph-mon[75144]: 3.d scrub starts
Nov 25 20:09:06 compute-0 ceph-mon[75144]: 3.d scrub ok
Nov 25 20:09:06 compute-0 ceph-mon[75144]: 7.8 scrub starts
Nov 25 20:09:06 compute-0 ceph-mon[75144]: 7.8 scrub ok
Nov 25 20:09:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v125: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:07 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.15 scrub starts
Nov 25 20:09:07 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.15 scrub ok
Nov 25 20:09:07 compute-0 ceph-mon[75144]: pgmap v125: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:07 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 25 20:09:07 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 25 20:09:08 compute-0 ceph-mon[75144]: 1.15 scrub starts
Nov 25 20:09:08 compute-0 ceph-mon[75144]: 1.15 scrub ok
Nov 25 20:09:08 compute-0 ceph-mon[75144]: 3.7 scrub starts
Nov 25 20:09:08 compute-0 ceph-mon[75144]: 3.7 scrub ok
Nov 25 20:09:08 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.5 deep-scrub starts
Nov 25 20:09:08 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.5 deep-scrub ok
Nov 25 20:09:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v126: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:09 compute-0 ceph-mon[75144]: 1.5 deep-scrub starts
Nov 25 20:09:09 compute-0 ceph-mon[75144]: 1.5 deep-scrub ok
Nov 25 20:09:09 compute-0 ceph-mon[75144]: pgmap v126: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:09 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.16 scrub starts
Nov 25 20:09:10 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.16 scrub ok
Nov 25 20:09:10 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Nov 25 20:09:10 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Nov 25 20:09:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v127: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:11 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 25 20:09:11 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 25 20:09:11 compute-0 ceph-mon[75144]: 1.16 scrub starts
Nov 25 20:09:11 compute-0 ceph-mon[75144]: 1.16 scrub ok
Nov 25 20:09:11 compute-0 ceph-mon[75144]: 6.16 scrub starts
Nov 25 20:09:11 compute-0 ceph-mon[75144]: 6.16 scrub ok
Nov 25 20:09:11 compute-0 ceph-mon[75144]: pgmap v127: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:11 compute-0 ceph-mon[75144]: 7.1 scrub starts
Nov 25 20:09:11 compute-0 ceph-mon[75144]: 7.1 scrub ok
Nov 25 20:09:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v128: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:12 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 25 20:09:12 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 25 20:09:13 compute-0 ceph-mon[75144]: pgmap v128: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v129: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:14 compute-0 ceph-mon[75144]: 3.10 scrub starts
Nov 25 20:09:14 compute-0 ceph-mon[75144]: 3.10 scrub ok
Nov 25 20:09:15 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 25 20:09:15 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 25 20:09:15 compute-0 ceph-mon[75144]: pgmap v129: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v130: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:16 compute-0 ceph-mon[75144]: 3.5 scrub starts
Nov 25 20:09:16 compute-0 ceph-mon[75144]: 3.5 scrub ok
Nov 25 20:09:17 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 25 20:09:17 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 25 20:09:17 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Nov 25 20:09:17 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Nov 25 20:09:17 compute-0 ceph-mon[75144]: pgmap v130: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:18 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.19 scrub starts
Nov 25 20:09:18 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.19 scrub ok
Nov 25 20:09:18 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.f scrub starts
Nov 25 20:09:18 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.f scrub ok
Nov 25 20:09:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v131: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:18 compute-0 ceph-mon[75144]: 3.1d scrub starts
Nov 25 20:09:18 compute-0 ceph-mon[75144]: 3.1d scrub ok
Nov 25 20:09:18 compute-0 ceph-mon[75144]: 6.18 scrub starts
Nov 25 20:09:18 compute-0 ceph-mon[75144]: 6.18 scrub ok
Nov 25 20:09:19 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 25 20:09:19 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 25 20:09:19 compute-0 ceph-mon[75144]: 1.19 scrub starts
Nov 25 20:09:19 compute-0 ceph-mon[75144]: 1.19 scrub ok
Nov 25 20:09:19 compute-0 ceph-mon[75144]: 1.f scrub starts
Nov 25 20:09:19 compute-0 ceph-mon[75144]: 1.f scrub ok
Nov 25 20:09:19 compute-0 ceph-mon[75144]: pgmap v131: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:19 compute-0 ceph-mon[75144]: 6.19 scrub starts
Nov 25 20:09:19 compute-0 ceph-mon[75144]: 6.19 scrub ok
Nov 25 20:09:20 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.1a scrub starts
Nov 25 20:09:20 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.1a scrub ok
Nov 25 20:09:20 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 25 20:09:20 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 25 20:09:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v132: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:20 compute-0 ceph-mon[75144]: 1.1a scrub starts
Nov 25 20:09:20 compute-0 ceph-mon[75144]: 1.1a scrub ok
Nov 25 20:09:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:21 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 20:09:21 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 20:09:21 compute-0 ceph-mon[75144]: 7.1a scrub starts
Nov 25 20:09:21 compute-0 ceph-mon[75144]: 7.1a scrub ok
Nov 25 20:09:21 compute-0 ceph-mon[75144]: pgmap v132: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v133: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:22 compute-0 ceph-mon[75144]: 7.a scrub starts
Nov 25 20:09:22 compute-0 ceph-mon[75144]: 7.a scrub ok
Nov 25 20:09:23 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 25 20:09:23 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 25 20:09:23 compute-0 ceph-mon[75144]: pgmap v133: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:23 compute-0 ceph-mon[75144]: 3.13 scrub starts
Nov 25 20:09:23 compute-0 ceph-mon[75144]: 3.13 scrub ok
Nov 25 20:09:24 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.1d scrub starts
Nov 25 20:09:24 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.1d scrub ok
Nov 25 20:09:24 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 25 20:09:24 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 25 20:09:24 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Nov 25 20:09:24 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Nov 25 20:09:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v134: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:24 compute-0 ceph-mon[75144]: 1.1d scrub starts
Nov 25 20:09:24 compute-0 ceph-mon[75144]: 1.1d scrub ok
Nov 25 20:09:24 compute-0 ceph-mon[75144]: 6.1a scrub starts
Nov 25 20:09:24 compute-0 ceph-mon[75144]: 6.1a scrub ok
Nov 25 20:09:25 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 25 20:09:25 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 25 20:09:25 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 25 20:09:25 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 25 20:09:25 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Nov 25 20:09:25 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Nov 25 20:09:25 compute-0 ceph-mon[75144]: 3.1e scrub starts
Nov 25 20:09:25 compute-0 ceph-mon[75144]: 3.1e scrub ok
Nov 25 20:09:25 compute-0 ceph-mon[75144]: pgmap v134: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:25 compute-0 ceph-mon[75144]: 3.14 scrub starts
Nov 25 20:09:25 compute-0 ceph-mon[75144]: 3.14 scrub ok
Nov 25 20:09:25 compute-0 ceph-mon[75144]: 6.1b scrub starts
Nov 25 20:09:25 compute-0 ceph-mon[75144]: 6.1b scrub ok
Nov 25 20:09:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:26 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.d scrub starts
Nov 25 20:09:26 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.d scrub ok
Nov 25 20:09:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v135: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:09:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:09:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:09:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:09:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:09:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:09:26 compute-0 ceph-mon[75144]: 3.e scrub starts
Nov 25 20:09:26 compute-0 ceph-mon[75144]: 3.e scrub ok
Nov 25 20:09:27 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.1b scrub starts
Nov 25 20:09:27 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.1b scrub ok
Nov 25 20:09:27 compute-0 ceph-mon[75144]: 1.d scrub starts
Nov 25 20:09:27 compute-0 ceph-mon[75144]: 1.d scrub ok
Nov 25 20:09:27 compute-0 ceph-mon[75144]: pgmap v135: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:27 compute-0 ceph-mon[75144]: 1.1b scrub starts
Nov 25 20:09:27 compute-0 ceph-mon[75144]: 1.1b scrub ok
Nov 25 20:09:28 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.1e scrub starts
Nov 25 20:09:28 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 1.1e scrub ok
Nov 25 20:09:28 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 25 20:09:28 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 25 20:09:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v136: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:28 compute-0 sudo[102156]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:28 compute-0 ceph-mon[75144]: 1.1e scrub starts
Nov 25 20:09:28 compute-0 ceph-mon[75144]: 1.1e scrub ok
Nov 25 20:09:28 compute-0 ceph-mon[75144]: 3.1b scrub starts
Nov 25 20:09:28 compute-0 ceph-mon[75144]: 3.1b scrub ok
Nov 25 20:09:29 compute-0 sudo[102447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sztwyknaxcxjgnosexzjlgmkcqbbseus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101369.0879717-137-131596690202995/AnsiballZ_command.py'
Nov 25 20:09:29 compute-0 sudo[102447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:29 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 25 20:09:29 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 25 20:09:29 compute-0 python3.9[102449]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:09:29 compute-0 ceph-mon[75144]: pgmap v136: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:30 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 25 20:09:30 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 25 20:09:30 compute-0 sudo[102447]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v137: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:30 compute-0 ceph-mon[75144]: 7.1c scrub starts
Nov 25 20:09:30 compute-0 ceph-mon[75144]: 7.1c scrub ok
Nov 25 20:09:30 compute-0 ceph-mon[75144]: 3.19 scrub starts
Nov 25 20:09:30 compute-0 ceph-mon[75144]: 3.19 scrub ok
Nov 25 20:09:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:31 compute-0 sudo[102734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdgnrihrankdvcugmbpbcousedmsgxfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101370.7476292-145-265062145210635/AnsiballZ_selinux.py'
Nov 25 20:09:31 compute-0 sudo[102734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:31 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 25 20:09:31 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 25 20:09:31 compute-0 python3.9[102736]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 20:09:31 compute-0 sudo[102734]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:31 compute-0 ceph-mon[75144]: pgmap v137: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:32 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 25 20:09:32 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 25 20:09:32 compute-0 sudo[102886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uroznahbrxrxwfdfnbxnlgjgfvjxlasl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101372.157145-156-237588917946100/AnsiballZ_command.py'
Nov 25 20:09:32 compute-0 sudo[102886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:32 compute-0 python3.9[102888]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 20:09:32 compute-0 sudo[102886]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v138: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:33 compute-0 ceph-mon[75144]: 3.18 scrub starts
Nov 25 20:09:33 compute-0 ceph-mon[75144]: 3.18 scrub ok
Nov 25 20:09:33 compute-0 ceph-mon[75144]: 3.1a deep-scrub starts
Nov 25 20:09:33 compute-0 ceph-mon[75144]: 3.1a deep-scrub ok
Nov 25 20:09:33 compute-0 sudo[103038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hurhctyaudgagfvqamravrgkwmkvxomf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101372.9883215-164-107627092259681/AnsiballZ_file.py'
Nov 25 20:09:33 compute-0 sudo[103038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:33 compute-0 python3.9[103040]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:33 compute-0 sudo[103038]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:34 compute-0 ceph-mon[75144]: pgmap v138: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:34 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 25 20:09:34 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 25 20:09:34 compute-0 sudo[103190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmfqgecwwgcdgrysvtnjmxmsnlcojsvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101373.8639002-172-25424862539106/AnsiballZ_mount.py'
Nov 25 20:09:34 compute-0 sudo[103190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:34 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.18 scrub starts
Nov 25 20:09:34 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.18 scrub ok
Nov 25 20:09:34 compute-0 python3.9[103192]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 20:09:34 compute-0 sudo[103190]: pam_unix(sudo:session): session closed for user root
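Note: the three tasks above build a swap file end to end: `dd` allocates 1 GiB at /swap (1024 x 1 MiB blocks, with `creates=/swap` making the task idempotent), the file task locks it down to root:root 0600, and `ansible.posix.mount` with `state=present` persists it in /etc/fstab without activating it (no mkswap/swapon appears here). A sketch composing the fstab entry the mount parameters imply:

```python
# Compose the fstab line implied by the mount parameters logged above:
# src=/swap, name=none, fstype=swap, opts=sw, dump=0, passno=0.
fields = ["/swap", "none", "swap", "sw", "0", "0"]
print("\t".join(fields))  # -> /swap  none  swap  sw  0  0
```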
Nov 25 20:09:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v139: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:35 compute-0 ceph-mon[75144]: 3.1c scrub starts
Nov 25 20:09:35 compute-0 ceph-mon[75144]: 3.1c scrub ok
Nov 25 20:09:35 compute-0 ceph-mon[75144]: pgmap v139: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:35 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Nov 25 20:09:35 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Nov 25 20:09:35 compute-0 sudo[103342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agoajrrshbhlhthtcnncroyyqyycxwzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101375.357034-200-87919780103575/AnsiballZ_file.py'
Nov 25 20:09:35 compute-0 sudo[103342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:35 compute-0 python3.9[103344]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:09:35 compute-0 sudo[103342]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:36 compute-0 ceph-mon[75144]: 1.18 scrub starts
Nov 25 20:09:36 compute-0 ceph-mon[75144]: 1.18 scrub ok
Nov 25 20:09:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:36 compute-0 sudo[103494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgfghixgdyhtmvslkesknpxwhrixgraj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101376.110828-208-167910952276777/AnsiballZ_stat.py'
Nov 25 20:09:36 compute-0 sudo[103494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:36 compute-0 python3.9[103496]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:36 compute-0 sudo[103494]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v140: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:37 compute-0 ceph-mon[75144]: 6.15 scrub starts
Nov 25 20:09:37 compute-0 ceph-mon[75144]: 6.15 scrub ok
Nov 25 20:09:37 compute-0 ceph-mon[75144]: pgmap v140: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:37 compute-0 sudo[103572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncqlsktgrwjypbafphastathsnffvleh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101376.110828-208-167910952276777/AnsiballZ_file.py'
Nov 25 20:09:37 compute-0 sudo[103572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:37 compute-0 python3.9[103574]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:37 compute-0 sudo[103572]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:38 compute-0 sudo[103724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryvcczhwgbdbhaqnjbcfwnpqrldouknn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101377.835429-229-129030375339490/AnsiballZ_stat.py'
Nov 25 20:09:38 compute-0 sudo[103724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:38 compute-0 python3.9[103726]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:09:38 compute-0 sudo[103724]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:38 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Nov 25 20:09:38 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Nov 25 20:09:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v141: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:39 compute-0 sudo[103878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gppnhtgiitttkodxhafiwlwjdtwfwwst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101378.9010756-242-235598384652407/AnsiballZ_getent.py'
Nov 25 20:09:39 compute-0 sudo[103878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:39 compute-0 python3.9[103880]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 20:09:39 compute-0 sudo[103878]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:39 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 25 20:09:39 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 25 20:09:39 compute-0 ceph-mon[75144]: 7.1f deep-scrub starts
Nov 25 20:09:39 compute-0 ceph-mon[75144]: 7.1f deep-scrub ok
Nov 25 20:09:39 compute-0 ceph-mon[75144]: pgmap v141: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:39 compute-0 sudo[103906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:39 compute-0 sudo[103906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:39 compute-0 sudo[103906]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:39 compute-0 sudo[103957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:09:39 compute-0 sudo[103957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:39 compute-0 sudo[103957]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:40 compute-0 sudo[104010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:40 compute-0 sudo[104010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:40 compute-0 sudo[104010]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:40 compute-0 sudo[104059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:09:40 compute-0 sudo[104059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:40 compute-0 sudo[104131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmochiuycmxuehsnfhkoorenacnkpbsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101379.9092767-252-202321221816588/AnsiballZ_getent.py'
Nov 25 20:09:40 compute-0 sudo[104131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:40 compute-0 python3.9[104133]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 20:09:40 compute-0 sudo[104131]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:40 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 25 20:09:40 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 25 20:09:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v142: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:40 compute-0 podman[104251]: 2025-11-25 20:09:40.800613861 +0000 UTC m=+0.089535797 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:09:40 compute-0 ceph-mon[75144]: 3.f scrub starts
Nov 25 20:09:40 compute-0 ceph-mon[75144]: 3.f scrub ok
Nov 25 20:09:40 compute-0 podman[104251]: 2025-11-25 20:09:40.919848289 +0000 UTC m=+0.208770205 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:09:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:41 compute-0 sudo[104441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwaawznqffhnnlpudbwmyalpqmydylla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101380.682166-260-211186528201908/AnsiballZ_group.py'
Nov 25 20:09:41 compute-0 sudo[104441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:41 compute-0 python3.9[104447]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 20:09:41 compute-0 sudo[104441]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:41 compute-0 sudo[104059]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:09:41 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:09:41 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:41 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 25 20:09:41 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 25 20:09:41 compute-0 sudo[104500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:41 compute-0 sudo[104500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:41 compute-0 sudo[104500]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:41 compute-0 sudo[104529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:09:41 compute-0 sudo[104529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:41 compute-0 sudo[104529]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:41 compute-0 sudo[104554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:41 compute-0 sudo[104554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:41 compute-0 sudo[104554]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:41 compute-0 ceph-mon[75144]: 7.4 scrub starts
Nov 25 20:09:41 compute-0 ceph-mon[75144]: 7.4 scrub ok
Nov 25 20:09:41 compute-0 ceph-mon[75144]: pgmap v142: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:41 compute-0 sudo[104609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:09:41 compute-0 sudo[104609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:42 compute-0 sudo[104743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egdfrmusjcxzfulbvpgdlhzduuahrbnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101381.8155434-269-52517427076985/AnsiballZ_file.py'
Nov 25 20:09:42 compute-0 sudo[104743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:42 compute-0 python3.9[104745]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 20:09:42 compute-0 sudo[104743]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:42 compute-0 sudo[104609]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:09:42 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:09:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:09:42 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:09:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:09:42 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:42 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 6aee6005-c543-4005-b646-8cde81089a8a does not exist
Nov 25 20:09:42 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 1ea5d2dc-d1af-4654-b789-1beeef2627d4 does not exist
Nov 25 20:09:42 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 0a3a61e6-b902-46ec-9289-39065b2f2a27 does not exist
Nov 25 20:09:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:09:42 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:09:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:09:42 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:09:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:09:42 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:09:42 compute-0 sudo[104787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:42 compute-0 sudo[104787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:42 compute-0 sudo[104787]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:42 compute-0 sudo[104812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:09:42 compute-0 sudo[104812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:42 compute-0 sudo[104812]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:42 compute-0 sudo[104837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:42 compute-0 sudo[104837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:42 compute-0 sudo[104837]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v143: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:42 compute-0 sudo[104868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:09:42 compute-0 sudo[104868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:42 compute-0 ceph-mon[75144]: 3.c scrub starts
Nov 25 20:09:42 compute-0 ceph-mon[75144]: 3.c scrub ok
Nov 25 20:09:42 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:09:42 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:09:42 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:42 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:09:42 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:09:42 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:09:42 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 25 20:09:42 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 25 20:09:43 compute-0 sudo[105052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdbzdijqoxbndcvrisfiqhhgrulhxnfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101382.7829528-280-248671134710702/AnsiballZ_dnf.py'
Nov 25 20:09:43 compute-0 sudo[105052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:43 compute-0 podman[105054]: 2025-11-25 20:09:43.185460333 +0000 UTC m=+0.051099103 container create 26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:09:43 compute-0 systemd[1]: Started libpod-conmon-26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734.scope.
Nov 25 20:09:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:09:43 compute-0 podman[105054]: 2025-11-25 20:09:43.166285789 +0000 UTC m=+0.031924569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:09:43 compute-0 podman[105054]: 2025-11-25 20:09:43.270161326 +0000 UTC m=+0.135800166 container init 26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:09:43 compute-0 podman[105054]: 2025-11-25 20:09:43.285286769 +0000 UTC m=+0.150925549 container start 26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 20:09:43 compute-0 podman[105054]: 2025-11-25 20:09:43.28929949 +0000 UTC m=+0.154938330 container attach 26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:09:43 compute-0 reverent_fermi[105071]: 167 167
Nov 25 20:09:43 compute-0 systemd[1]: libpod-26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734.scope: Deactivated successfully.
Nov 25 20:09:43 compute-0 conmon[105071]: conmon 26262df3d73989460dc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734.scope/container/memory.events
Nov 25 20:09:43 compute-0 podman[105054]: 2025-11-25 20:09:43.293848936 +0000 UTC m=+0.159487706 container died 26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cc48fd2cb48d766e5ea00ed2152662159f3951b2b38d7d2074e2a11562bcdb2-merged.mount: Deactivated successfully.
Nov 25 20:09:43 compute-0 podman[105054]: 2025-11-25 20:09:43.332904338 +0000 UTC m=+0.198543088 container remove 26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:09:43 compute-0 systemd[1]: libpod-conmon-26262df3d73989460dc353c4d71135e2d340dfda2f0144b6cfd6a2a13642d734.scope: Deactivated successfully.
Nov 25 20:09:43 compute-0 python3.9[105056]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:09:43 compute-0 podman[105096]: 2025-11-25 20:09:43.542832266 +0000 UTC m=+0.054721093 container create 826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:09:43 compute-0 systemd[1]: Started libpod-conmon-826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e.scope.
Nov 25 20:09:43 compute-0 podman[105096]: 2025-11-25 20:09:43.526532687 +0000 UTC m=+0.038421544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:09:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f35a101ca431229ba6d81250e64f47d1511a5f0eab601b9b2ec444bbb2733/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f35a101ca431229ba6d81250e64f47d1511a5f0eab601b9b2ec444bbb2733/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f35a101ca431229ba6d81250e64f47d1511a5f0eab601b9b2ec444bbb2733/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f35a101ca431229ba6d81250e64f47d1511a5f0eab601b9b2ec444bbb2733/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f35a101ca431229ba6d81250e64f47d1511a5f0eab601b9b2ec444bbb2733/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:43 compute-0 podman[105096]: 2025-11-25 20:09:43.643897078 +0000 UTC m=+0.155785925 container init 826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:09:43 compute-0 podman[105096]: 2025-11-25 20:09:43.65127357 +0000 UTC m=+0.163162407 container start 826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:09:43 compute-0 podman[105096]: 2025-11-25 20:09:43.654635551 +0000 UTC m=+0.166524388 container attach 826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 20:09:43 compute-0 ceph-mon[75144]: pgmap v143: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:44 compute-0 sudo[105052]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v144: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:44 compute-0 charming_leakey[105113]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:09:44 compute-0 charming_leakey[105113]: --> relative data size: 1.0
Nov 25 20:09:44 compute-0 charming_leakey[105113]: --> All data devices are unavailable
Nov 25 20:09:44 compute-0 systemd[1]: libpod-826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e.scope: Deactivated successfully.
Nov 25 20:09:44 compute-0 podman[105096]: 2025-11-25 20:09:44.84548212 +0000 UTC m=+1.357370967 container died 826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:09:44 compute-0 systemd[1]: libpod-826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e.scope: Consumed 1.135s CPU time.
Nov 25 20:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-da0f35a101ca431229ba6d81250e64f47d1511a5f0eab601b9b2ec444bbb2733-merged.mount: Deactivated successfully.
Nov 25 20:09:44 compute-0 ceph-mon[75144]: 7.7 scrub starts
Nov 25 20:09:44 compute-0 ceph-mon[75144]: 7.7 scrub ok
Nov 25 20:09:44 compute-0 podman[105096]: 2025-11-25 20:09:44.914772299 +0000 UTC m=+1.426661126 container remove 826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:09:44 compute-0 systemd[1]: libpod-conmon-826293a1377f41553d373936763ad3e443632a758720723758063425faa19b6e.scope: Deactivated successfully.
Nov 25 20:09:44 compute-0 sudo[104868]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:44 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Nov 25 20:09:44 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Nov 25 20:09:44 compute-0 sudo[105230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:45 compute-0 sudo[105230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:45 compute-0 sudo[105230]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:45 compute-0 sudo[105278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:09:45 compute-0 sudo[105278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:45 compute-0 sudo[105278]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:45 compute-0 sudo[105323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:45 compute-0 sudo[105323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:45 compute-0 sudo[105323]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:45 compute-0 sudo[105398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvpimldwianeqiaqrctnrsmegfdgqnbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101384.814956-288-268850251555122/AnsiballZ_file.py'
Nov 25 20:09:45 compute-0 sudo[105398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:45 compute-0 sudo[105363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:09:45 compute-0 sudo[105363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:45 compute-0 python3.9[105403]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:09:45 compute-0 sudo[105398]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:45 compute-0 podman[105469]: 2025-11-25 20:09:45.620703819 +0000 UTC m=+0.072001062 container create 8492d550ad225260572beb060152fadd596f087a9e82a8019575df24dc2444c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:09:45 compute-0 systemd[1]: Started libpod-conmon-8492d550ad225260572beb060152fadd596f087a9e82a8019575df24dc2444c6.scope.
Nov 25 20:09:45 compute-0 podman[105469]: 2025-11-25 20:09:45.591998408 +0000 UTC m=+0.043295721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:09:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:09:45 compute-0 podman[105469]: 2025-11-25 20:09:45.711833093 +0000 UTC m=+0.163130316 container init 8492d550ad225260572beb060152fadd596f087a9e82a8019575df24dc2444c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_proskuriakova, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:09:45 compute-0 podman[105469]: 2025-11-25 20:09:45.718147043 +0000 UTC m=+0.169444256 container start 8492d550ad225260572beb060152fadd596f087a9e82a8019575df24dc2444c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_proskuriakova, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:09:45 compute-0 podman[105469]: 2025-11-25 20:09:45.721235515 +0000 UTC m=+0.172532728 container attach 8492d550ad225260572beb060152fadd596f087a9e82a8019575df24dc2444c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_proskuriakova, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 20:09:45 compute-0 gracious_proskuriakova[105512]: 167 167
Nov 25 20:09:45 compute-0 systemd[1]: libpod-8492d550ad225260572beb060152fadd596f087a9e82a8019575df24dc2444c6.scope: Deactivated successfully.
Nov 25 20:09:45 compute-0 podman[105469]: 2025-11-25 20:09:45.725611366 +0000 UTC m=+0.176908579 container died 8492d550ad225260572beb060152fadd596f087a9e82a8019575df24dc2444c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:09:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9878e5717a58bda0e2f9bd0761ef77c1fb38545988f1197545757aaa4e55b2c-merged.mount: Deactivated successfully.
Nov 25 20:09:45 compute-0 podman[105469]: 2025-11-25 20:09:45.762136672 +0000 UTC m=+0.213433885 container remove 8492d550ad225260572beb060152fadd596f087a9e82a8019575df24dc2444c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_proskuriakova, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:09:45 compute-0 systemd[1]: libpod-conmon-8492d550ad225260572beb060152fadd596f087a9e82a8019575df24dc2444c6.scope: Deactivated successfully.
Nov 25 20:09:45 compute-0 ceph-mon[75144]: pgmap v144: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:45 compute-0 ceph-mon[75144]: 7.b deep-scrub starts
Nov 25 20:09:45 compute-0 ceph-mon[75144]: 7.b deep-scrub ok
Nov 25 20:09:45 compute-0 podman[105606]: 2025-11-25 20:09:45.9623661 +0000 UTC m=+0.047022662 container create f47885df1c2139dcf24ede5a28c81f35d3c2ab2e4b7b0478c46a2909d650f958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:09:45 compute-0 sudo[105646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqkvmxtukrmbmmlgtdpydlykvvwmydzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101385.640446-296-168956278795489/AnsiballZ_stat.py'
Nov 25 20:09:45 compute-0 sudo[105646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:46 compute-0 systemd[1]: Started libpod-conmon-f47885df1c2139dcf24ede5a28c81f35d3c2ab2e4b7b0478c46a2909d650f958.scope.
Nov 25 20:09:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a4d3e5a36c557cc93e553462b71bf2d82159d70fd9b5d96d126034417a2a5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a4d3e5a36c557cc93e553462b71bf2d82159d70fd9b5d96d126034417a2a5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a4d3e5a36c557cc93e553462b71bf2d82159d70fd9b5d96d126034417a2a5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a4d3e5a36c557cc93e553462b71bf2d82159d70fd9b5d96d126034417a2a5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:46 compute-0 podman[105606]: 2025-11-25 20:09:46.041437542 +0000 UTC m=+0.126094214 container init f47885df1c2139dcf24ede5a28c81f35d3c2ab2e4b7b0478c46a2909d650f958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:09:46 compute-0 podman[105606]: 2025-11-25 20:09:45.946741551 +0000 UTC m=+0.031398133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:09:46 compute-0 podman[105606]: 2025-11-25 20:09:46.053895716 +0000 UTC m=+0.138552278 container start f47885df1c2139dcf24ede5a28c81f35d3c2ab2e4b7b0478c46a2909d650f958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:09:46 compute-0 podman[105606]: 2025-11-25 20:09:46.05869994 +0000 UTC m=+0.143356522 container attach f47885df1c2139dcf24ede5a28c81f35d3c2ab2e4b7b0478c46a2909d650f958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:09:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:46 compute-0 python3.9[105650]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:46 compute-0 sudo[105646]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:46 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Nov 25 20:09:46 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Nov 25 20:09:46 compute-0 sudo[105731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhvoaozcopxqugshtlbijwujrtvtgpke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101385.640446-296-168956278795489/AnsiballZ_file.py'
Nov 25 20:09:46 compute-0 sudo[105731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:46 compute-0 python3.9[105733]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:09:46 compute-0 sudo[105731]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v145: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:46 compute-0 vibrant_morse[105651]: {
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:     "0": [
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:         {
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "devices": [
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "/dev/loop3"
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             ],
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_name": "ceph_lv0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_size": "21470642176",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "name": "ceph_lv0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "tags": {
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.cluster_name": "ceph",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.crush_device_class": "",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.encrypted": "0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.osd_id": "0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.type": "block",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.vdo": "0"
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             },
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "type": "block",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "vg_name": "ceph_vg0"
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:         }
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:     ],
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:     "1": [
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:         {
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "devices": [
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "/dev/loop4"
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             ],
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_name": "ceph_lv1",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_size": "21470642176",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "name": "ceph_lv1",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "tags": {
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.cluster_name": "ceph",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.crush_device_class": "",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.encrypted": "0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.osd_id": "1",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.type": "block",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.vdo": "0"
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             },
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "type": "block",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "vg_name": "ceph_vg1"
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:         }
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:     ],
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:     "2": [
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:         {
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "devices": [
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "/dev/loop5"
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             ],
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_name": "ceph_lv2",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_size": "21470642176",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "name": "ceph_lv2",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "tags": {
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.cluster_name": "ceph",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.crush_device_class": "",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.encrypted": "0",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.osd_id": "2",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.type": "block",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:                 "ceph.vdo": "0"
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             },
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "type": "block",
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:             "vg_name": "ceph_vg2"
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:         }
Nov 25 20:09:46 compute-0 vibrant_morse[105651]:     ]
Nov 25 20:09:46 compute-0 vibrant_morse[105651]: }
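
The `ceph-volume lvm list --format json` entries printed above by the `vibrant_morse` container (the excerpt begins mid-output at osd.1) carry the same metadata twice: once as the flat `lv_tags` string and once pre-parsed under `tags`. When only the flat form is at hand, the dict can be recovered with a two-level split; a minimal sketch, assuming tag values never contain `,` or `=` (true for every value printed above):

    import json

    def parse_lv_tags(lv_tags: str) -> dict:
        # "k1=v1,k2=v2,..." -> {"k1": "v1", ...}; empty values such as
        # ceph.crush_device_class= become "".
        tags = {}
        for pair in lv_tags.split(","):
            key, _, value = pair.partition("=")
            tags[key] = value
        return tags

    # Tag string from the osd.1 entry above, shortened for the example:
    sample = ("ceph.block_device=/dev/ceph_vg1/ceph_lv1,"
              "ceph.crush_device_class=,"
              "ceph.osd_id=1,ceph.type=block")
    print(json.dumps(parse_lv_tags(sample), indent=4))
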
Nov 25 20:09:46 compute-0 systemd[1]: libpod-f47885df1c2139dcf24ede5a28c81f35d3c2ab2e4b7b0478c46a2909d650f958.scope: Deactivated successfully.
Nov 25 20:09:46 compute-0 podman[105606]: 2025-11-25 20:09:46.856075614 +0000 UTC m=+0.940732186 container died f47885df1c2139dcf24ede5a28c81f35d3c2ab2e4b7b0478c46a2909d650f958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:09:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7a4d3e5a36c557cc93e553462b71bf2d82159d70fd9b5d96d126034417a2a5f-merged.mount: Deactivated successfully.
Nov 25 20:09:46 compute-0 podman[105606]: 2025-11-25 20:09:46.926407785 +0000 UTC m=+1.011064347 container remove f47885df1c2139dcf24ede5a28c81f35d3c2ab2e4b7b0478c46a2909d650f958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:09:46 compute-0 systemd[1]: libpod-conmon-f47885df1c2139dcf24ede5a28c81f35d3c2ab2e4b7b0478c46a2909d650f958.scope: Deactivated successfully.
Nov 25 20:09:46 compute-0 sudo[105363]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:47 compute-0 sudo[105820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:47 compute-0 sudo[105820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:47 compute-0 sudo[105820]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:47 compute-0 sudo[105865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:09:47 compute-0 sudo[105865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:47 compute-0 sudo[105865]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:47 compute-0 sudo[105909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:47 compute-0 sudo[105909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:47 compute-0 sudo[105909]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:47 compute-0 sudo[105979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlsrxyexikpxsgiefgmsqfowfyemoyww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101386.9370046-309-42121829329688/AnsiballZ_stat.py'
Nov 25 20:09:47 compute-0 sudo[105979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
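
The `/bin/sh -c 'echo BECOME-SUCCESS-... ; /usr/bin/python3.9 ... AnsiballZ_*.py'` shape recurring in the zuul sudo lines is Ansible's privilege-escalation handshake: a random marker is echoed first, and the controller discards everything up to it so that sudo/PAM chatter cannot corrupt the module's JSON reply. A hypothetical re-implementation of the idea (not Ansible's actual code; the module path is made up):

    import random
    import string
    import subprocess

    # Random marker, same shape as the BECOME-SUCCESS-... strings logged above.
    marker = "BECOME-SUCCESS-" + "".join(random.choices(string.ascii_lowercase, k=32))

    # Hypothetical module path; AnsiballZ_*.py files are what the controller ships.
    cmd = f"echo {marker} ; /usr/bin/python3.9 /tmp/AnsiballZ_stat.py"
    proc = subprocess.run(["sudo", "/bin/sh", "-c", cmd],
                          capture_output=True, text=True)

    # Keep only what the module itself printed, after the marker line.
    reply = proc.stdout.split(marker, 1)[-1].lstrip()
    print(reply)
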
Nov 25 20:09:47 compute-0 sudo[105974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:09:47 compute-0 sudo[105974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:47 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Nov 25 20:09:47 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Nov 25 20:09:47 compute-0 python3.9[105990]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:47 compute-0 sudo[105979]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:47 compute-0 podman[106069]: 2025-11-25 20:09:47.749722584 +0000 UTC m=+0.055961016 container create 53562d62e8426bdb5b76095874b4e853c970f1013b619fa529226f79b174fa6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:09:47 compute-0 systemd[1]: Started libpod-conmon-53562d62e8426bdb5b76095874b4e853c970f1013b619fa529226f79b174fa6c.scope.
Nov 25 20:09:47 compute-0 podman[106069]: 2025-11-25 20:09:47.720318107 +0000 UTC m=+0.026556599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:09:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:09:47 compute-0 podman[106069]: 2025-11-25 20:09:47.854095768 +0000 UTC m=+0.160334210 container init 53562d62e8426bdb5b76095874b4e853c970f1013b619fa529226f79b174fa6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 20:09:47 compute-0 podman[106069]: 2025-11-25 20:09:47.868884363 +0000 UTC m=+0.175122775 container start 53562d62e8426bdb5b76095874b4e853c970f1013b619fa529226f79b174fa6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 20:09:47 compute-0 podman[106069]: 2025-11-25 20:09:47.872502173 +0000 UTC m=+0.178740715 container attach 53562d62e8426bdb5b76095874b4e853c970f1013b619fa529226f79b174fa6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:09:47 compute-0 cranky_solomon[106109]: 167 167
Nov 25 20:09:47 compute-0 systemd[1]: libpod-53562d62e8426bdb5b76095874b4e853c970f1013b619fa529226f79b174fa6c.scope: Deactivated successfully.
Nov 25 20:09:47 compute-0 podman[106069]: 2025-11-25 20:09:47.878358964 +0000 UTC m=+0.184597466 container died 53562d62e8426bdb5b76095874b4e853c970f1013b619fa529226f79b174fa6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:09:47 compute-0 ceph-mon[75144]: 4.18 deep-scrub starts
Nov 25 20:09:47 compute-0 ceph-mon[75144]: 4.18 deep-scrub ok
Nov 25 20:09:47 compute-0 ceph-mon[75144]: pgmap v145: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:47 compute-0 sudo[106141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymwtozlsljldxiqsqltattachxczhei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101386.9370046-309-42121829329688/AnsiballZ_file.py'
Nov 25 20:09:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3c452765663d2ee818073244639e7e0603de010d342f803aaa4826e449004bf-merged.mount: Deactivated successfully.
Nov 25 20:09:47 compute-0 sudo[106141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:47 compute-0 podman[106069]: 2025-11-25 20:09:47.937364353 +0000 UTC m=+0.243602765 container remove 53562d62e8426bdb5b76095874b4e853c970f1013b619fa529226f79b174fa6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:09:47 compute-0 systemd[1]: libpod-conmon-53562d62e8426bdb5b76095874b4e853c970f1013b619fa529226f79b174fa6c.scope: Deactivated successfully.
Nov 25 20:09:48 compute-0 python3.9[106154]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:09:48 compute-0 podman[106162]: 2025-11-25 20:09:48.16689901 +0000 UTC m=+0.072496700 container create e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:09:48 compute-0 sudo[106141]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:48 compute-0 systemd[1]: Started libpod-conmon-e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d.scope.
Nov 25 20:09:48 compute-0 podman[106162]: 2025-11-25 20:09:48.139387775 +0000 UTC m=+0.044985515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:09:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92855c8bfe82e5e3492c090159e4fb1413edf77f266779f5a776ba16dd2d7d7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92855c8bfe82e5e3492c090159e4fb1413edf77f266779f5a776ba16dd2d7d7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92855c8bfe82e5e3492c090159e4fb1413edf77f266779f5a776ba16dd2d7d7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92855c8bfe82e5e3492c090159e4fb1413edf77f266779f5a776ba16dd2d7d7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:09:48 compute-0 podman[106162]: 2025-11-25 20:09:48.258690408 +0000 UTC m=+0.164288068 container init e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:09:48 compute-0 podman[106162]: 2025-11-25 20:09:48.267669645 +0000 UTC m=+0.173267335 container start e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:09:48 compute-0 podman[106162]: 2025-11-25 20:09:48.270928355 +0000 UTC m=+0.176526025 container attach e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:09:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v146: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:48 compute-0 sudo[106334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfzdacgqgbaitzkfzbhpxjcciuqkoygy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101388.5136912-324-22242214284973/AnsiballZ_dnf.py'
Nov 25 20:09:48 compute-0 sudo[106334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:48 compute-0 ceph-mon[75144]: 6.14 scrub starts
Nov 25 20:09:48 compute-0 ceph-mon[75144]: 6.14 scrub ok
Nov 25 20:09:49 compute-0 python3.9[106337]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:09:49 compute-0 nice_noether[106192]: {
Nov 25 20:09:49 compute-0 nice_noether[106192]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "osd_id": 2,
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "type": "bluestore"
Nov 25 20:09:49 compute-0 nice_noether[106192]:     },
Nov 25 20:09:49 compute-0 nice_noether[106192]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "osd_id": 1,
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "type": "bluestore"
Nov 25 20:09:49 compute-0 nice_noether[106192]:     },
Nov 25 20:09:49 compute-0 nice_noether[106192]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "osd_id": 0,
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:09:49 compute-0 nice_noether[106192]:         "type": "bluestore"
Nov 25 20:09:49 compute-0 nice_noether[106192]:     }
Nov 25 20:09:49 compute-0 nice_noether[106192]: }
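
Unlike the `lvm list` output, the `ceph-volume raw list` JSON printed above by the `nice_noether` container is keyed by `osd_uuid` and reports device-mapper paths (`/dev/mapper/ceph_vgN-ceph_lvN`) rather than LV paths; it also covers all three OSDs (0-2), where the lvm excerpt above starts at osd.1. A sketch of joining the two documents into one record per OSD id; the file names are assumptions, the JSON bodies are exactly what the two containers logged:

    import json

    # Assumed file names; contents are the two JSON documents logged above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)   # {"1": [{...}], "2": [{...}], ...}
    with open("raw_list.json") as f:
        raw = json.load(f)   # {"<osd_uuid>": {...}, ...}

    osds = {}
    for osd_id, entries in lvm.items():
        for entry in entries:
            osds[int(osd_id)] = {
                "lv_path": entry["lv_path"],
                "osd_fsid": entry["tags"]["ceph.osd_fsid"],
            }
    for info in raw.values():
        osds.setdefault(info["osd_id"], {})["dm_path"] = info["device"]

    for osd_id in sorted(osds):
        print(osd_id, osds[osd_id])
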
Nov 25 20:09:49 compute-0 systemd[1]: libpod-e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d.scope: Deactivated successfully.
Nov 25 20:09:49 compute-0 systemd[1]: libpod-e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d.scope: Consumed 1.069s CPU time.
Nov 25 20:09:49 compute-0 conmon[106192]: conmon e47ad370246ebe37b3c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d.scope/container/memory.events
Nov 25 20:09:49 compute-0 podman[106162]: 2025-11-25 20:09:49.33866711 +0000 UTC m=+1.244264760 container died e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:09:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-92855c8bfe82e5e3492c090159e4fb1413edf77f266779f5a776ba16dd2d7d7b-merged.mount: Deactivated successfully.
Nov 25 20:09:49 compute-0 podman[106162]: 2025-11-25 20:09:49.388521398 +0000 UTC m=+1.294119048 container remove e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:09:49 compute-0 systemd[1]: libpod-conmon-e47ad370246ebe37b3c2624ae148293cd5c90d9c66416ac1ac4cfc823c91d32d.scope: Deactivated successfully.
Nov 25 20:09:49 compute-0 sudo[105974]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:09:49 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:09:49 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:49 compute-0 sudo[106380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:09:49 compute-0 sudo[106380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:49 compute-0 sudo[106380]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:49 compute-0 sudo[106405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:09:49 compute-0 sudo[106405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:09:49 compute-0 sudo[106405]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:49 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 25 20:09:49 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 25 20:09:49 compute-0 ceph-mon[75144]: pgmap v146: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:49 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:49 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:09:49 compute-0 ceph-mon[75144]: 3.1 scrub starts
Nov 25 20:09:49 compute-0 ceph-mon[75144]: 3.1 scrub ok
Nov 25 20:09:50 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 25 20:09:50 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 25 20:09:50 compute-0 sudo[106334]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v147: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:50 compute-0 ceph-mon[75144]: 7.d scrub starts
Nov 25 20:09:50 compute-0 ceph-mon[75144]: 7.d scrub ok
Nov 25 20:09:50 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 25 20:09:50 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 25 20:09:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:51 compute-0 python3.9[106579]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:09:51 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 25 20:09:51 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 25 20:09:51 compute-0 ceph-mon[75144]: pgmap v147: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:51 compute-0 ceph-mon[75144]: 7.10 scrub starts
Nov 25 20:09:51 compute-0 ceph-mon[75144]: 7.10 scrub ok
Nov 25 20:09:51 compute-0 ceph-mon[75144]: 7.18 scrub starts
Nov 25 20:09:51 compute-0 ceph-mon[75144]: 7.18 scrub ok
Nov 25 20:09:52 compute-0 python3.9[106731]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 20:09:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v148: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:52 compute-0 python3.9[106881]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:09:53 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 25 20:09:53 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 25 20:09:53 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 25 20:09:53 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 25 20:09:53 compute-0 ceph-mon[75144]: pgmap v148: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:53 compute-0 ceph-mon[75144]: 7.9 deep-scrub starts
Nov 25 20:09:53 compute-0 ceph-mon[75144]: 7.9 deep-scrub ok
Nov 25 20:09:54 compute-0 sudo[107031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olmjsohwyllvknykqzqiklyddaetdxna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101393.4551985-365-163883497313798/AnsiballZ_systemd.py'
Nov 25 20:09:54 compute-0 sudo[107031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:54 compute-0 python3.9[107033]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:09:54 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 20:09:54 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 20:09:54 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 20:09:54 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 20:09:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v149: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:54 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 20:09:54 compute-0 sudo[107031]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:54 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 25 20:09:54 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 25 20:09:55 compute-0 ceph-mon[75144]: 4.11 scrub starts
Nov 25 20:09:55 compute-0 ceph-mon[75144]: 4.11 scrub ok
Nov 25 20:09:55 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 25 20:09:55 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 25 20:09:55 compute-0 python3.9[107195]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 20:09:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:09:56 compute-0 ceph-mon[75144]: pgmap v149: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:56 compute-0 ceph-mon[75144]: 7.12 deep-scrub starts
Nov 25 20:09:56 compute-0 ceph-mon[75144]: 7.12 deep-scrub ok
Nov 25 20:09:56 compute-0 ceph-mon[75144]: 7.6 scrub starts
Nov 25 20:09:56 compute-0 ceph-mon[75144]: 7.6 scrub ok
Nov 25 20:09:56 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Nov 25 20:09:56 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v150: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:09:56
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['volumes', 'vms', 'images', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Nov 25 20:09:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:09:57 compute-0 ceph-mon[75144]: pgmap v150: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:57 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 25 20:09:57 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 25 20:09:57 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 25 20:09:57 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 25 20:09:58 compute-0 sudo[107345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvqmvzdcpgpnrfksdibrnnoudiqepxkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101397.6188064-422-226138942226363/AnsiballZ_systemd.py'
Nov 25 20:09:58 compute-0 sudo[107345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:58 compute-0 ceph-mon[75144]: 6.11 scrub starts
Nov 25 20:09:58 compute-0 ceph-mon[75144]: 6.11 scrub ok
Nov 25 20:09:58 compute-0 ceph-mon[75144]: 3.3 scrub starts
Nov 25 20:09:58 compute-0 ceph-mon[75144]: 3.3 scrub ok
Nov 25 20:09:58 compute-0 python3.9[107347]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:09:58 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 25 20:09:58 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 25 20:09:58 compute-0 sudo[107345]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:58 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.1f deep-scrub starts
Nov 25 20:09:58 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.1f deep-scrub ok
Nov 25 20:09:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v151: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:59 compute-0 sudo[107499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sabtbegfnhylwxgaercifilmznwynpam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101398.6244302-422-176813177258704/AnsiballZ_systemd.py'
Nov 25 20:09:59 compute-0 sudo[107499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:09:59 compute-0 ceph-mon[75144]: 7.14 scrub starts
Nov 25 20:09:59 compute-0 ceph-mon[75144]: 7.14 scrub ok
Nov 25 20:09:59 compute-0 ceph-mon[75144]: 1.1f deep-scrub starts
Nov 25 20:09:59 compute-0 ceph-mon[75144]: 1.1f deep-scrub ok
Nov 25 20:09:59 compute-0 ceph-mon[75144]: pgmap v151: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:09:59 compute-0 python3.9[107501]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:09:59 compute-0 sudo[107499]: pam_unix(sudo:session): session closed for user root
Nov 25 20:09:59 compute-0 sshd-session[99474]: Connection closed by 192.168.122.30 port 52656
Nov 25 20:09:59 compute-0 sshd-session[99462]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:09:59 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 25 20:09:59 compute-0 systemd[1]: session-35.scope: Consumed 1min 7.002s CPU time.
Nov 25 20:09:59 compute-0 systemd-logind[789]: Session 35 logged out. Waiting for processes to exit.
Nov 25 20:09:59 compute-0 systemd-logind[789]: Removed session 35.
Nov 25 20:10:00 compute-0 ceph-mon[75144]: 4.13 scrub starts
Nov 25 20:10:00 compute-0 ceph-mon[75144]: 4.13 scrub ok
Nov 25 20:10:00 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.1c scrub starts
Nov 25 20:10:00 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.1c scrub ok
Nov 25 20:10:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v152: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:01 compute-0 ceph-mon[75144]: 1.1c scrub starts
Nov 25 20:10:01 compute-0 ceph-mon[75144]: 1.1c scrub ok
Nov 25 20:10:01 compute-0 ceph-mon[75144]: pgmap v152: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:01 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 25 20:10:01 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
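
The pg_autoscaler lines above are internally consistent: each pool's pg target is its capacity ratio times its bias times the cluster-wide PG budget. Assuming the default budget of 100 PGs per OSD (mon_target_pg_per_osd is not shown in this log) and the three OSDs listed in the raw list output earlier, the '.mgr' line reproduces exactly:

    # Check of the pg_autoscaler arithmetic for pool '.mgr' above.
    # Assumptions: mon_target_pg_per_osd = 100 (Ceph default; not in this log)
    # and 3 OSDs (osd.0-osd.2 appear in the raw list output earlier).
    capacity_ratio = 1.4371499967441557e-05   # "using ... of space"
    bias = 1.0
    pg_budget = 100 * 3
    print(capacity_ratio * bias * pg_budget)  # 0.004311449990232467 -> quantized to 1
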
Nov 25 20:10:02 compute-0 ceph-mon[75144]: 4.e scrub starts
Nov 25 20:10:02 compute-0 ceph-mon[75144]: 4.e scrub ok
Nov 25 20:10:02 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 25 20:10:02 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 25 20:10:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v153: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:03 compute-0 ceph-mon[75144]: 3.6 scrub starts
Nov 25 20:10:03 compute-0 ceph-mon[75144]: 3.6 scrub ok
Nov 25 20:10:03 compute-0 ceph-mon[75144]: pgmap v153: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:03 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Nov 25 20:10:03 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Nov 25 20:10:04 compute-0 ceph-mon[75144]: 7.3 deep-scrub starts
Nov 25 20:10:04 compute-0 ceph-mon[75144]: 7.3 deep-scrub ok
Nov 25 20:10:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v154: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:05 compute-0 sshd-session[107528]: Accepted publickey for zuul from 192.168.122.30 port 43796 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:10:05 compute-0 systemd-logind[789]: New session 36 of user zuul.
Nov 25 20:10:05 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 25 20:10:05 compute-0 sshd-session[107528]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:10:05 compute-0 ceph-mon[75144]: pgmap v154: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:05 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 25 20:10:05 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 25 20:10:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:06 compute-0 python3.9[107681]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:10:06 compute-0 ceph-mon[75144]: 7.f scrub starts
Nov 25 20:10:06 compute-0 ceph-mon[75144]: 7.f scrub ok
Nov 25 20:10:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v155: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:06 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 25 20:10:06 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 25 20:10:07 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 25 20:10:07 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 25 20:10:07 compute-0 ceph-mon[75144]: pgmap v155: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:07 compute-0 sudo[107835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lanfnevnytlnatbenatmrhoguprzygyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101407.0408401-36-196670492793665/AnsiballZ_getent.py'
Nov 25 20:10:07 compute-0 sudo[107835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:07 compute-0 python3.9[107837]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 20:10:07 compute-0 sudo[107835]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:07 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Nov 25 20:10:07 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Nov 25 20:10:08 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 25 20:10:08 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 25 20:10:08 compute-0 sudo[107988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odwuusiogmczuvudunfclrkqxiqozpax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101408.1239176-48-268255016831139/AnsiballZ_setup.py'
Nov 25 20:10:08 compute-0 sudo[107988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:08 compute-0 ceph-mon[75144]: 7.16 scrub starts
Nov 25 20:10:08 compute-0 ceph-mon[75144]: 7.16 scrub ok
Nov 25 20:10:08 compute-0 ceph-mon[75144]: 6.f scrub starts
Nov 25 20:10:08 compute-0 ceph-mon[75144]: 6.f scrub ok
Nov 25 20:10:08 compute-0 python3.9[107990]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:10:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v156: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:09 compute-0 sudo[107988]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:09 compute-0 ceph-mon[75144]: 7.17 deep-scrub starts
Nov 25 20:10:09 compute-0 ceph-mon[75144]: 7.17 deep-scrub ok
Nov 25 20:10:09 compute-0 ceph-mon[75144]: 4.1a scrub starts
Nov 25 20:10:09 compute-0 ceph-mon[75144]: 4.1a scrub ok
Nov 25 20:10:09 compute-0 ceph-mon[75144]: pgmap v156: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:09 compute-0 sudo[108072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmeumsdzxfsimcqolldxfjufapjpqtve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101408.1239176-48-268255016831139/AnsiballZ_dnf.py'
Nov 25 20:10:09 compute-0 sudo[108072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:09 compute-0 python3.9[108074]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 20:10:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v157: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:10 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 25 20:10:10 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 25 20:10:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:11 compute-0 sudo[108072]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:11 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 20:10:11 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 20:10:11 compute-0 ceph-mon[75144]: pgmap v157: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:11 compute-0 sudo[108225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjftzkwpeeghvvcyvwiusmlkdekbirfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101411.597792-62-93532163678385/AnsiballZ_dnf.py'
Nov 25 20:10:11 compute-0 sudo[108225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:12 compute-0 python3.9[108227]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:10:12 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 25 20:10:12 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 25 20:10:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v158: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:12 compute-0 ceph-mon[75144]: 7.19 scrub starts
Nov 25 20:10:12 compute-0 ceph-mon[75144]: 7.19 scrub ok
Nov 25 20:10:12 compute-0 ceph-mon[75144]: 6.8 scrub starts
Nov 25 20:10:12 compute-0 ceph-mon[75144]: 6.8 scrub ok
Nov 25 20:10:13 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 25 20:10:13 compute-0 sudo[108225]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:13 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 25 20:10:13 compute-0 ceph-mon[75144]: 3.a scrub starts
Nov 25 20:10:13 compute-0 ceph-mon[75144]: 3.a scrub ok
Nov 25 20:10:13 compute-0 ceph-mon[75144]: pgmap v158: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:14 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 25 20:10:14 compute-0 sudo[108378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdewsjhkqhnajwkfxcgyqknycqdomdia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101413.6635091-70-45133278416516/AnsiballZ_systemd.py'
Nov 25 20:10:14 compute-0 sudo[108378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:14 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 25 20:10:14 compute-0 python3.9[108380]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:10:14 compute-0 sudo[108378]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v159: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:14 compute-0 ceph-mon[75144]: 4.a scrub starts
Nov 25 20:10:14 compute-0 ceph-mon[75144]: 4.a scrub ok
Nov 25 20:10:15 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 25 20:10:15 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 25 20:10:15 compute-0 python3.9[108533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:10:15 compute-0 ceph-mon[75144]: 4.1c scrub starts
Nov 25 20:10:15 compute-0 ceph-mon[75144]: 4.1c scrub ok
Nov 25 20:10:15 compute-0 ceph-mon[75144]: pgmap v159: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:15 compute-0 ceph-mon[75144]: 3.9 scrub starts
Nov 25 20:10:15 compute-0 ceph-mon[75144]: 3.9 scrub ok
Nov 25 20:10:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:16 compute-0 sudo[108683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygrzivedwarehfjocqlkvxtkeykapihr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101416.0132852-88-254840521300592/AnsiballZ_sefcontext.py'
Nov 25 20:10:16 compute-0 sudo[108683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:16 compute-0 python3.9[108685]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 20:10:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v160: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:16 compute-0 sudo[108683]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:17 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 25 20:10:17 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 25 20:10:17 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 25 20:10:17 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 25 20:10:17 compute-0 python3.9[108835]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:10:17 compute-0 ceph-mon[75144]: pgmap v160: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:17 compute-0 ceph-mon[75144]: 6.1f scrub starts
Nov 25 20:10:17 compute-0 ceph-mon[75144]: 7.13 scrub starts
Nov 25 20:10:17 compute-0 ceph-mon[75144]: 7.13 scrub ok
Nov 25 20:10:18 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Nov 25 20:10:18 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Nov 25 20:10:18 compute-0 sudo[108991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwphwzilvdwpfmnouvrjqkfqzzzyjqty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101418.3155358-106-220674292355673/AnsiballZ_dnf.py'
Nov 25 20:10:18 compute-0 sudo[108991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v161: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:18 compute-0 python3.9[108993]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
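With every toggle at its default, the ansible.legacy.dnf call above reduces to a plain package install; roughly:

    sudo dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container \
        crypto-policies-scripts grubby sos
    # state=present: only missing packages are installed;
    # install_weak_deps=True matches dnf's stock behaviour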
Nov 25 20:10:18 compute-0 ceph-mon[75144]: 6.1f scrub ok
Nov 25 20:10:18 compute-0 ceph-mon[75144]: 6.13 scrub starts
Nov 25 20:10:18 compute-0 ceph-mon[75144]: 6.13 scrub ok
Nov 25 20:10:18 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1d deep-scrub starts
Nov 25 20:10:18 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1d deep-scrub ok
Nov 25 20:10:19 compute-0 ceph-mon[75144]: pgmap v161: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:19 compute-0 ceph-mon[75144]: 7.1d deep-scrub starts
Nov 25 20:10:19 compute-0 ceph-mon[75144]: 7.1d deep-scrub ok
Nov 25 20:10:19 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 25 20:10:19 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 25 20:10:20 compute-0 sudo[108991]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:20 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 25 20:10:20 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 25 20:10:20 compute-0 sudo[109144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stpvshyldhaohonojbagorsbqwmadglf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101420.3553956-114-134980628350284/AnsiballZ_command.py'
Nov 25 20:10:20 compute-0 sudo[109144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v162: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:20 compute-0 ceph-mon[75144]: 7.1e scrub starts
Nov 25 20:10:20 compute-0 ceph-mon[75144]: 7.1e scrub ok
Nov 25 20:10:21 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 20:10:21 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 20:10:21 compute-0 python3.9[109146]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
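The follow-up `rpm -V` verifies the on-disk files of the packages just installed against the rpm database. For reference (two packages shown; the logged command lists all eighteen):

    rpm -V sos grubby; echo $?
    # silent with exit status 0 when every file matches; otherwise each
    # discrepancy prints as a flag string, e.g.
    #   S.5....T.  c /etc/sysconfig/sysstat
    # (S=size, 5=digest, T=mtime differ; 'c' marks a config file)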
Nov 25 20:10:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:21 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.e scrub starts
Nov 25 20:10:21 compute-0 ceph-osd[91367]: log_channel(cluster) log [DBG] : 1.e scrub ok
Nov 25 20:10:21 compute-0 sudo[109144]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:21 compute-0 ceph-mon[75144]: 4.1b scrub starts
Nov 25 20:10:21 compute-0 ceph-mon[75144]: 4.1b scrub ok
Nov 25 20:10:21 compute-0 ceph-mon[75144]: pgmap v162: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:21 compute-0 ceph-mon[75144]: 5.9 scrub starts
Nov 25 20:10:21 compute-0 ceph-mon[75144]: 5.9 scrub ok
Nov 25 20:10:22 compute-0 sudo[109431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftitlqyzvqxjfktxxpaxnwaxyoypzzoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101422.0505672-122-254689806940511/AnsiballZ_file.py'
Nov 25 20:10:22 compute-0 sudo[109431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:22 compute-0 python3.9[109433]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
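The builtin.file task is the directory-creation counterpart of the context rule registered earlier; a minimal shell sketch:

    install -d -m 0750 /var/lib/edpm-config
    restorecon -v /var/lib/edpm-config  # apply the registered context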
Nov 25 20:10:22 compute-0 sudo[109431]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v163: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:22 compute-0 ceph-mon[75144]: 1.e scrub starts
Nov 25 20:10:22 compute-0 ceph-mon[75144]: 1.e scrub ok
Nov 25 20:10:23 compute-0 python3.9[109583]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:10:23 compute-0 ceph-mon[75144]: pgmap v163: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:24 compute-0 sudo[109735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atnotvqgqttxqbroxemxxtbhqozyfjhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101423.829613-138-3026012247915/AnsiballZ_dnf.py'
Nov 25 20:10:24 compute-0 sudo[109735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:24 compute-0 python3.9[109737]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:10:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v164: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:24 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 25 20:10:24 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 25 20:10:25 compute-0 sudo[109735]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:25 compute-0 ceph-mon[75144]: pgmap v164: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:25 compute-0 ceph-mon[75144]: 5.18 scrub starts
Nov 25 20:10:25 compute-0 ceph-mon[75144]: 5.18 scrub ok
Nov 25 20:10:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:26 compute-0 sudo[109888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqilkenhncogvufrhqjdnaowbhluxtoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101426.0578775-147-185575956865138/AnsiballZ_dnf.py'
Nov 25 20:10:26 compute-0 sudo[109888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:26 compute-0 python3.9[109890]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:10:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v165: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:10:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:10:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:10:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:10:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:10:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:10:27 compute-0 sudo[109888]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:27 compute-0 ceph-mon[75144]: pgmap v165: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:28 compute-0 sudo[110041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oblkpqnwkrkgvtzwcvmnfvwskbfuahhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101428.2384667-159-161132110151775/AnsiballZ_stat.py'
Nov 25 20:10:28 compute-0 sudo[110041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:28 compute-0 python3.9[110043]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:10:28 compute-0 sudo[110041]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v166: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:29 compute-0 sudo[110195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exsreawcgrbadvxgfvnmkqbgivnnqdnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101428.9618304-167-61199030292108/AnsiballZ_slurp.py'
Nov 25 20:10:29 compute-0 sudo[110195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:29 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.17 scrub starts
Nov 25 20:10:29 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.17 scrub ok
Nov 25 20:10:30 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 20:10:30 compute-0 ceph-mon[75144]: pgmap v166: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:30 compute-0 ceph-mon[75144]: 1.17 scrub starts
Nov 25 20:10:30 compute-0 ceph-mon[75144]: 1.17 scrub ok
Nov 25 20:10:30 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 20:10:30 compute-0 python3.9[110197]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
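slurp returns the file base64-encoded in the module result rather than as plain text, and the controller decodes it; given the file name, the content is presumably the exit code of the previous os-net-config run. Sketch, with host and path as logged:

    ansible compute-0 -b -m ansible.builtin.slurp \
        -a 'src=/var/lib/edpm-config/os-net-config.returncode'
    # returns {"content": "<base64>", "encoding": "base64", ...};
    # decode with: printf '%s' "$content" | base64 -d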
Nov 25 20:10:30 compute-0 sudo[110195]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v167: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:31 compute-0 ceph-mon[75144]: 5.1d scrub starts
Nov 25 20:10:31 compute-0 ceph-mon[75144]: 5.1d scrub ok
Nov 25 20:10:31 compute-0 ceph-mon[75144]: pgmap v167: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:31 compute-0 sshd-session[107531]: Connection closed by 192.168.122.30 port 43796
Nov 25 20:10:31 compute-0 sshd-session[107528]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:10:31 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 25 20:10:31 compute-0 systemd-logind[789]: Session 36 logged out. Waiting for processes to exit.
Nov 25 20:10:31 compute-0 systemd[1]: session-36.scope: Consumed 19.260s CPU time.
Nov 25 20:10:31 compute-0 systemd-logind[789]: Removed session 36.
Nov 25 20:10:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v168: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:33 compute-0 ceph-mon[75144]: pgmap v168: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v169: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:35 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 25 20:10:35 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 25 20:10:35 compute-0 ceph-mon[75144]: pgmap v169: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:36 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.11 scrub starts
Nov 25 20:10:36 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.11 scrub ok
Nov 25 20:10:36 compute-0 sshd-session[110222]: Accepted publickey for zuul from 192.168.122.30 port 34944 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:10:36 compute-0 systemd-logind[789]: New session 37 of user zuul.
Nov 25 20:10:36 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 25 20:10:36 compute-0 sshd-session[110222]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:10:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v170: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:36 compute-0 ceph-mon[75144]: 3.15 scrub starts
Nov 25 20:10:36 compute-0 ceph-mon[75144]: 3.15 scrub ok
Nov 25 20:10:37 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 25 20:10:37 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 25 20:10:37 compute-0 python3.9[110375]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:10:37 compute-0 ceph-mon[75144]: 1.11 scrub starts
Nov 25 20:10:37 compute-0 ceph-mon[75144]: 1.11 scrub ok
Nov 25 20:10:37 compute-0 ceph-mon[75144]: pgmap v170: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v171: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:38 compute-0 ceph-mon[75144]: 3.12 scrub starts
Nov 25 20:10:38 compute-0 ceph-mon[75144]: 3.12 scrub ok
Nov 25 20:10:39 compute-0 python3.9[110529]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:10:39 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 25 20:10:39 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 25 20:10:39 compute-0 ceph-mon[75144]: pgmap v171: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:39 compute-0 ceph-mon[75144]: 5.1a scrub starts
Nov 25 20:10:39 compute-0 ceph-mon[75144]: 5.1a scrub ok
Nov 25 20:10:40 compute-0 python3.9[110722]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:10:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v172: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:41 compute-0 sshd-session[110225]: Connection closed by 192.168.122.30 port 34944
Nov 25 20:10:41 compute-0 sshd-session[110222]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:10:41 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 25 20:10:41 compute-0 systemd[1]: session-37.scope: Consumed 2.792s CPU time.
Nov 25 20:10:41 compute-0 systemd-logind[789]: Session 37 logged out. Waiting for processes to exit.
Nov 25 20:10:41 compute-0 systemd-logind[789]: Removed session 37.
Nov 25 20:10:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:41 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 25 20:10:41 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 25 20:10:41 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.10 scrub starts
Nov 25 20:10:41 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.10 scrub ok
Nov 25 20:10:41 compute-0 ceph-mon[75144]: pgmap v172: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:41 compute-0 ceph-mon[75144]: 5.f scrub starts
Nov 25 20:10:41 compute-0 ceph-mon[75144]: 5.f scrub ok
Nov 25 20:10:41 compute-0 ceph-mon[75144]: 1.10 scrub starts
Nov 25 20:10:41 compute-0 ceph-mon[75144]: 1.10 scrub ok
Nov 25 20:10:42 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 20:10:42 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 20:10:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v173: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:42 compute-0 ceph-mon[75144]: 5.c scrub starts
Nov 25 20:10:42 compute-0 ceph-mon[75144]: 5.c scrub ok
Nov 25 20:10:43 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts
Nov 25 20:10:43 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok
Nov 25 20:10:43 compute-0 ceph-mon[75144]: pgmap v173: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:43 compute-0 ceph-mon[75144]: 5.12 deep-scrub starts
Nov 25 20:10:43 compute-0 ceph-mon[75144]: 5.12 deep-scrub ok
Nov 25 20:10:44 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 20:10:44 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 20:10:44 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.12 scrub starts
Nov 25 20:10:44 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 1.12 scrub ok
Nov 25 20:10:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v174: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:44 compute-0 ceph-mon[75144]: 5.11 scrub starts
Nov 25 20:10:44 compute-0 ceph-mon[75144]: 5.11 scrub ok
Nov 25 20:10:44 compute-0 ceph-mon[75144]: 1.12 scrub starts
Nov 25 20:10:44 compute-0 ceph-mon[75144]: 1.12 scrub ok
Nov 25 20:10:45 compute-0 ceph-mon[75144]: pgmap v174: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v175: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:46 compute-0 sshd-session[110748]: Accepted publickey for zuul from 192.168.122.30 port 44258 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:10:46 compute-0 systemd-logind[789]: New session 38 of user zuul.
Nov 25 20:10:46 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 25 20:10:46 compute-0 sshd-session[110748]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:10:47 compute-0 ceph-mon[75144]: pgmap v175: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:48 compute-0 python3.9[110901]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:10:48 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 25 20:10:48 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 25 20:10:48 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Nov 25 20:10:48 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Nov 25 20:10:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v176: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:48 compute-0 ceph-mon[75144]: 5.16 scrub starts
Nov 25 20:10:48 compute-0 ceph-mon[75144]: 5.16 scrub ok
Nov 25 20:10:48 compute-0 ceph-mon[75144]: 7.1b deep-scrub starts
Nov 25 20:10:48 compute-0 ceph-mon[75144]: 7.1b deep-scrub ok
Nov 25 20:10:49 compute-0 python3.9[111055]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:10:49 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Nov 25 20:10:49 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Nov 25 20:10:49 compute-0 sudo[111084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:10:49 compute-0 sudo[111084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:49 compute-0 sudo[111084]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:49 compute-0 sudo[111115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:10:49 compute-0 sudo[111115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:49 compute-0 sudo[111115]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:49 compute-0 sudo[111174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:10:49 compute-0 sudo[111174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:49 compute-0 sudo[111174]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:49 compute-0 sudo[111211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:10:49 compute-0 sudo[111211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:49 compute-0 ceph-mon[75144]: pgmap v176: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:49 compute-0 ceph-mon[75144]: 5.13 deep-scrub starts
Nov 25 20:10:49 compute-0 ceph-mon[75144]: 5.13 deep-scrub ok
Nov 25 20:10:50 compute-0 sudo[111323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwwksprruhuqmymjcinszbtipqytoye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101449.6963906-40-234610405631940/AnsiballZ_setup.py'
Nov 25 20:10:50 compute-0 sudo[111323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:50 compute-0 sudo[111211]: pam_unix(sudo:session): session closed for user root
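The `gather-facts` call just closed is the orchestrator's host-inventory probe: cephadm keeps a checksum-named copy of itself under /var/lib/ceph/<fsid>/, and that copy is what sudo executed. Running a packaged cephadm by hand yields the same JSON (a sketch):

    sudo cephadm gather-facts | jq .
    # prints host facts (hostname, kernel, memory, NICs, ...) that the
    # mgr/cephadm module stores for scheduling decisions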
Nov 25 20:10:50 compute-0 python3.9[111325]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:10:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:10:50 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:10:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:10:50 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:10:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:10:50 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:10:50 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev b09957a9-a193-4530-8638-0d8f8ef7e46b does not exist
Nov 25 20:10:50 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev e01f93ba-bcc7-472b-a3cf-f453e41106d8 does not exist
Nov 25 20:10:50 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 6bdcc1f1-b6f6-479c-9d39-3f1c6c3815cc does not exist
Nov 25 20:10:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:10:50 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:10:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:10:50 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:10:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:10:50 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
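This burst of mon_command dispatches is cephadm refreshing client material and checking for reusable OSD ids before it creates new OSDs; the same queries can be issued manually from any node holding an admin keyring:

    ceph config generate-minimal-conf        # minimal client ceph.conf
    ceph auth get client.bootstrap-osd       # keyring ceph-volume will use
    ceph osd tree destroyed --format json    # destroyed ids eligible for reuse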
Nov 25 20:10:50 compute-0 sudo[111347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:10:50 compute-0 sudo[111347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:50 compute-0 sudo[111347]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:50 compute-0 sudo[111372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:10:50 compute-0 sudo[111372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:50 compute-0 sudo[111372]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:50 compute-0 sudo[111398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:10:50 compute-0 sudo[111398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:50 compute-0 sudo[111398]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:50 compute-0 sudo[111323]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:50 compute-0 sudo[111426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:10:50 compute-0 sudo[111426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
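This is the actual OSD-creation attempt: a ceph-volume `lvm batch` over three pre-created logical volumes, run inside the ceph container (--no-systemd because cephadm manages the units itself). To preview the plan without changing anything, ceph-volume's standard --report flag can be added (image digest abbreviated here; the full digest appears in the log line above):

    sudo cephadm --image quay.io/ceph/ceph@sha256:1b9158... \
        ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- \
        lvm batch --no-auto --report \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2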
Nov 25 20:10:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v177: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:10:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:10:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:10:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:10:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:10:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:10:51 compute-0 podman[111513]: 2025-11-25 20:10:51.021007734 +0000 UTC m=+0.047971598 container create 226ef1042f2e1e5962966516d66f3a294ba5a031cba0de83d2082b89a7e49572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:10:51 compute-0 systemd[1]: Started libpod-conmon-226ef1042f2e1e5962966516d66f3a294ba5a031cba0de83d2082b89a7e49572.scope.
Nov 25 20:10:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:10:51 compute-0 podman[111513]: 2025-11-25 20:10:51.00484571 +0000 UTC m=+0.031809604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:10:51 compute-0 podman[111513]: 2025-11-25 20:10:51.117679513 +0000 UTC m=+0.144643477 container init 226ef1042f2e1e5962966516d66f3a294ba5a031cba0de83d2082b89a7e49572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:10:51 compute-0 podman[111513]: 2025-11-25 20:10:51.131330987 +0000 UTC m=+0.158294891 container start 226ef1042f2e1e5962966516d66f3a294ba5a031cba0de83d2082b89a7e49572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:10:51 compute-0 podman[111513]: 2025-11-25 20:10:51.135259803 +0000 UTC m=+0.162223707 container attach 226ef1042f2e1e5962966516d66f3a294ba5a031cba0de83d2082b89a7e49572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:10:51 compute-0 elastic_cori[111553]: 167 167
Nov 25 20:10:51 compute-0 systemd[1]: libpod-226ef1042f2e1e5962966516d66f3a294ba5a031cba0de83d2082b89a7e49572.scope: Deactivated successfully.
Nov 25 20:10:51 compute-0 podman[111513]: 2025-11-25 20:10:51.140465887 +0000 UTC m=+0.167429801 container died 226ef1042f2e1e5962966516d66f3a294ba5a031cba0de83d2082b89a7e49572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:10:51 compute-0 sudo[111584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqyrdqosakeceupsyjkzytoxyuaauani ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101449.6963906-40-234610405631940/AnsiballZ_dnf.py'
Nov 25 20:10:51 compute-0 sudo[111584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-17958f7a95aa8b19a5d73e5f690e466a137a332fe1f584da7123588c3b66d010-merged.mount: Deactivated successfully.
Nov 25 20:10:51 compute-0 podman[111513]: 2025-11-25 20:10:51.192845716 +0000 UTC m=+0.219809600 container remove 226ef1042f2e1e5962966516d66f3a294ba5a031cba0de83d2082b89a7e49572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:10:51 compute-0 systemd[1]: libpod-conmon-226ef1042f2e1e5962966516d66f3a294ba5a031cba0de83d2082b89a7e49572.scope: Deactivated successfully.
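The create → init → start → attach → died → remove sequence above is the lifecycle of one short-lived `podman run --rm`. The "167 167" the container printed is consistent with cephadm probing the image for the ceph uid/gid (167 is the ceph user and group in the upstream images); the equivalent probe would be roughly:

    sudo podman run --rm quay.io/ceph/ceph@sha256:1b9158... \
        stat -c '%u %g' /var/lib/ceph
    # -> "167 167"; an assumption: the log records only the container's
    #    output, not its command line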
Nov 25 20:10:51 compute-0 podman[111606]: 2025-11-25 20:10:51.4232448 +0000 UTC m=+0.081614185 container create 054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:10:51 compute-0 python3.9[111589]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:10:51 compute-0 systemd[1]: Started libpod-conmon-054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816.scope.
Nov 25 20:10:51 compute-0 podman[111606]: 2025-11-25 20:10:51.393717826 +0000 UTC m=+0.052087231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:10:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6795703973eab9e43a3c00d81f7973aeb2ff9214cc1aaca984f3f30e9f9ad5cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6795703973eab9e43a3c00d81f7973aeb2ff9214cc1aaca984f3f30e9f9ad5cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6795703973eab9e43a3c00d81f7973aeb2ff9214cc1aaca984f3f30e9f9ad5cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6795703973eab9e43a3c00d81f7973aeb2ff9214cc1aaca984f3f30e9f9ad5cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6795703973eab9e43a3c00d81f7973aeb2ff9214cc1aaca984f3f30e9f9ad5cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:51 compute-0 podman[111606]: 2025-11-25 20:10:51.529298096 +0000 UTC m=+0.187667521 container init 054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:10:51 compute-0 podman[111606]: 2025-11-25 20:10:51.542658272 +0000 UTC m=+0.201027637 container start 054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:10:51 compute-0 podman[111606]: 2025-11-25 20:10:51.546293699 +0000 UTC m=+0.204663144 container attach 054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:10:51 compute-0 ceph-mon[75144]: pgmap v177: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:52 compute-0 sharp_dewdney[111623]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:10:52 compute-0 sharp_dewdney[111623]: --> relative data size: 1.0
Nov 25 20:10:52 compute-0 sharp_dewdney[111623]: --> All data devices are unavailable
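"All data devices are unavailable" from `lvm batch` means ceph-volume filtered every candidate out — typically because the LVs already carry prepared OSDs — so the batch run is a no-op rather than an error. One way to check device eligibility, using the same wrapper as the log (a sketch):

    sudo cephadm ceph-volume \
        --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- \
        inventory --format json-pretty
    # lists block devices with an "available" flag and rejection reasons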
Nov 25 20:10:52 compute-0 sudo[111584]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:52 compute-0 systemd[1]: libpod-054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816.scope: Deactivated successfully.
Nov 25 20:10:52 compute-0 podman[111606]: 2025-11-25 20:10:52.69602448 +0000 UTC m=+1.354393835 container died 054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:10:52 compute-0 systemd[1]: libpod-054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816.scope: Consumed 1.102s CPU time.
Nov 25 20:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6795703973eab9e43a3c00d81f7973aeb2ff9214cc1aaca984f3f30e9f9ad5cf-merged.mount: Deactivated successfully.
Nov 25 20:10:52 compute-0 podman[111606]: 2025-11-25 20:10:52.759166598 +0000 UTC m=+1.417535963 container remove 054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:10:52 compute-0 systemd[1]: libpod-conmon-054303c5eaa4427b9e5b9dbe05f1406ea39cd07495caaa09b4bfe51af714c816.scope: Deactivated successfully.
Nov 25 20:10:52 compute-0 sudo[111426]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v178: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:52 compute-0 sudo[111689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:10:52 compute-0 sudo[111689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:52 compute-0 sudo[111689]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:52 compute-0 sudo[111714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:10:52 compute-0 sudo[111714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:52 compute-0 sudo[111714]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:53 compute-0 sudo[111763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:10:53 compute-0 sudo[111763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:53 compute-0 sudo[111763]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:53 compute-0 sudo[111816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:10:53 compute-0 sudo[111816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
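Consistent with that, cephadm immediately falls back to `ceph-volume lvm list` to enumerate the OSDs already present on the logical volumes; piping the JSON through jq shows the OSD ids (a sketch):

    sudo cephadm ceph-volume \
        --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- \
        lvm list --format json | jq keys
    # the JSON is keyed by OSD id, one entry per logical volume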
Nov 25 20:10:53 compute-0 sudo[111926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kayldxmufwmmcsuwcdaqwozllncdqbzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101452.9184034-52-32660442058133/AnsiballZ_setup.py'
Nov 25 20:10:53 compute-0 sudo[111926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:53 compute-0 podman[111957]: 2025-11-25 20:10:53.430655587 +0000 UTC m=+0.047059903 container create aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 20:10:53 compute-0 systemd[1]: Started libpod-conmon-aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f.scope.
Nov 25 20:10:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:10:53 compute-0 podman[111957]: 2025-11-25 20:10:53.410126519 +0000 UTC m=+0.026530865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:10:53 compute-0 podman[111957]: 2025-11-25 20:10:53.517257828 +0000 UTC m=+0.133662154 container init aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:10:53 compute-0 podman[111957]: 2025-11-25 20:10:53.524229703 +0000 UTC m=+0.140634009 container start aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:10:53 compute-0 podman[111957]: 2025-11-25 20:10:53.528006265 +0000 UTC m=+0.144410571 container attach aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:10:53 compute-0 loving_banzai[111974]: 167 167
Nov 25 20:10:53 compute-0 systemd[1]: libpod-aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f.scope: Deactivated successfully.
Nov 25 20:10:53 compute-0 conmon[111974]: conmon aa6168605ec1fbc54bdd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f.scope/container/memory.events
Nov 25 20:10:53 compute-0 python3.9[111935]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:10:53 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Nov 25 20:10:53 compute-0 podman[111979]: 2025-11-25 20:10:53.563501205 +0000 UTC m=+0.021680113 container died aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:10:53 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Nov 25 20:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a6185862c3e5e6a8a75c85e5d084a63397a841419290f2c68458aef75debbdc-merged.mount: Deactivated successfully.
Nov 25 20:10:53 compute-0 podman[111979]: 2025-11-25 20:10:53.600460908 +0000 UTC m=+0.058639836 container remove aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:10:53 compute-0 systemd[1]: libpod-conmon-aa6168605ec1fbc54bddd11ad38bebec9cad8feb6f989cdd1edc5e4e71c2911f.scope: Deactivated successfully.
Nov 25 20:10:53 compute-0 podman[112026]: 2025-11-25 20:10:53.790180189 +0000 UTC m=+0.050034841 container create 9007ae7080384b656a2aef010546f06dc898af3b879dab1da5f2f029f80c1d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:10:53 compute-0 systemd[1]: Started libpod-conmon-9007ae7080384b656a2aef010546f06dc898af3b879dab1da5f2f029f80c1d97.scope.
Nov 25 20:10:53 compute-0 podman[112026]: 2025-11-25 20:10:53.767722034 +0000 UTC m=+0.027576716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:10:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131df8b5517b147856a5db13697b98728a0a6317a18b5bea6d03d3fd2085a2b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131df8b5517b147856a5db13697b98728a0a6317a18b5bea6d03d3fd2085a2b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131df8b5517b147856a5db13697b98728a0a6317a18b5bea6d03d3fd2085a2b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131df8b5517b147856a5db13697b98728a0a6317a18b5bea6d03d3fd2085a2b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:53 compute-0 podman[112026]: 2025-11-25 20:10:53.889222657 +0000 UTC m=+0.149077379 container init 9007ae7080384b656a2aef010546f06dc898af3b879dab1da5f2f029f80c1d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:10:53 compute-0 podman[112026]: 2025-11-25 20:10:53.901977465 +0000 UTC m=+0.161832157 container start 9007ae7080384b656a2aef010546f06dc898af3b879dab1da5f2f029f80c1d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:10:53 compute-0 podman[112026]: 2025-11-25 20:10:53.907165168 +0000 UTC m=+0.167019860 container attach 9007ae7080384b656a2aef010546f06dc898af3b879dab1da5f2f029f80c1d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:10:53 compute-0 sudo[111926]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:54 compute-0 ceph-mon[75144]: pgmap v178: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:54 compute-0 ceph-mon[75144]: 3.1f deep-scrub starts
Nov 25 20:10:54 compute-0 ceph-mon[75144]: 3.1f deep-scrub ok
Nov 25 20:10:54 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 25 20:10:54 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 25 20:10:54 compute-0 strange_banzai[112056]: {
Nov 25 20:10:54 compute-0 strange_banzai[112056]:     "0": [
Nov 25 20:10:54 compute-0 strange_banzai[112056]:         {
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "devices": [
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "/dev/loop3"
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             ],
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_name": "ceph_lv0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_size": "21470642176",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "name": "ceph_lv0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "tags": {
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.cluster_name": "ceph",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.crush_device_class": "",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.encrypted": "0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.osd_id": "0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.type": "block",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.vdo": "0"
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             },
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "type": "block",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "vg_name": "ceph_vg0"
Nov 25 20:10:54 compute-0 strange_banzai[112056]:         }
Nov 25 20:10:54 compute-0 strange_banzai[112056]:     ],
Nov 25 20:10:54 compute-0 strange_banzai[112056]:     "1": [
Nov 25 20:10:54 compute-0 strange_banzai[112056]:         {
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "devices": [
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "/dev/loop4"
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             ],
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_name": "ceph_lv1",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_size": "21470642176",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "name": "ceph_lv1",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "tags": {
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.cluster_name": "ceph",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.crush_device_class": "",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.encrypted": "0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.osd_id": "1",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.type": "block",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.vdo": "0"
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             },
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "type": "block",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "vg_name": "ceph_vg1"
Nov 25 20:10:54 compute-0 strange_banzai[112056]:         }
Nov 25 20:10:54 compute-0 strange_banzai[112056]:     ],
Nov 25 20:10:54 compute-0 strange_banzai[112056]:     "2": [
Nov 25 20:10:54 compute-0 strange_banzai[112056]:         {
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "devices": [
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "/dev/loop5"
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             ],
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_name": "ceph_lv2",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_size": "21470642176",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "name": "ceph_lv2",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "tags": {
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.cluster_name": "ceph",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.crush_device_class": "",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.encrypted": "0",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.osd_id": "2",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.type": "block",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:                 "ceph.vdo": "0"
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             },
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "type": "block",
Nov 25 20:10:54 compute-0 strange_banzai[112056]:             "vg_name": "ceph_vg2"
Nov 25 20:10:54 compute-0 strange_banzai[112056]:         }
Nov 25 20:10:54 compute-0 strange_banzai[112056]:     ]
Nov 25 20:10:54 compute-0 strange_banzai[112056]: }
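The JSON block above, relayed from the strange_banzai container, is what appears to be `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapped to the logical-volume records (devices, VG/LV names, and ceph.* tags) backing that OSD. A minimal Python sketch for pulling the OSD-to-device mapping out of such a dump, assuming it has been saved to a hypothetical local file lvm_list.json:

    import json

    # lvm_list.json: a hypothetical local copy of the ceph-volume lvm list
    # JSON captured in the log above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each value is a list of
    # logical-volume records for that OSD.
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(vg={lv['vg_name']}, devices={','.join(lv['devices'])}, "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']})")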
Nov 25 20:10:54 compute-0 systemd[1]: libpod-9007ae7080384b656a2aef010546f06dc898af3b879dab1da5f2f029f80c1d97.scope: Deactivated successfully.
Nov 25 20:10:54 compute-0 podman[112026]: 2025-11-25 20:10:54.720913783 +0000 UTC m=+0.980768475 container died 9007ae7080384b656a2aef010546f06dc898af3b879dab1da5f2f029f80c1d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-131df8b5517b147856a5db13697b98728a0a6317a18b5bea6d03d3fd2085a2b3-merged.mount: Deactivated successfully.
Nov 25 20:10:54 compute-0 sudo[112231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocydwypbbwvitkmnklggzibpshvzugyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101454.2248437-63-64229416151731/AnsiballZ_file.py'
Nov 25 20:10:54 compute-0 sudo[112231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:54 compute-0 podman[112026]: 2025-11-25 20:10:54.792146401 +0000 UTC m=+1.052001053 container remove 9007ae7080384b656a2aef010546f06dc898af3b879dab1da5f2f029f80c1d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:10:54 compute-0 systemd[1]: libpod-conmon-9007ae7080384b656a2aef010546f06dc898af3b879dab1da5f2f029f80c1d97.scope: Deactivated successfully.
Nov 25 20:10:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v179: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:54 compute-0 sudo[111816]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:54 compute-0 sudo[112237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:10:54 compute-0 sudo[112237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:54 compute-0 sudo[112237]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:54 compute-0 sudo[112262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:10:54 compute-0 python3.9[112236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:54 compute-0 sudo[112262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:54 compute-0 sudo[112262]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:54 compute-0 sudo[112231]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:55 compute-0 ceph-mon[75144]: 5.19 scrub starts
Nov 25 20:10:55 compute-0 ceph-mon[75144]: 5.19 scrub ok
Nov 25 20:10:55 compute-0 sudo[112287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:10:55 compute-0 sudo[112287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:55 compute-0 sudo[112287]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:55 compute-0 sudo[112322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:10:55 compute-0 sudo[112322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:55 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 25 20:10:55 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 25 20:10:55 compute-0 podman[112461]: 2025-11-25 20:10:55.541287695 +0000 UTC m=+0.055017618 container create 1146fef80e6b280e5facf9418684ccdd91f757bd28ff252b8a4e918801686048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_visvesvaraya, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:10:55 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 25 20:10:55 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 25 20:10:55 compute-0 systemd[1]: Started libpod-conmon-1146fef80e6b280e5facf9418684ccdd91f757bd28ff252b8a4e918801686048.scope.
Nov 25 20:10:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:10:55 compute-0 podman[112461]: 2025-11-25 20:10:55.520881482 +0000 UTC m=+0.034611435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:10:55 compute-0 podman[112461]: 2025-11-25 20:10:55.623778964 +0000 UTC m=+0.137508927 container init 1146fef80e6b280e5facf9418684ccdd91f757bd28ff252b8a4e918801686048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:10:55 compute-0 podman[112461]: 2025-11-25 20:10:55.63410203 +0000 UTC m=+0.147831953 container start 1146fef80e6b280e5facf9418684ccdd91f757bd28ff252b8a4e918801686048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 25 20:10:55 compute-0 podman[112461]: 2025-11-25 20:10:55.637217692 +0000 UTC m=+0.150947635 container attach 1146fef80e6b280e5facf9418684ccdd91f757bd28ff252b8a4e918801686048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:10:55 compute-0 pedantic_visvesvaraya[112511]: 167 167
Nov 25 20:10:55 compute-0 systemd[1]: libpod-1146fef80e6b280e5facf9418684ccdd91f757bd28ff252b8a4e918801686048.scope: Deactivated successfully.
Nov 25 20:10:55 compute-0 podman[112461]: 2025-11-25 20:10:55.641316143 +0000 UTC m=+0.155046076 container died 1146fef80e6b280e5facf9418684ccdd91f757bd28ff252b8a4e918801686048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 20:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbffeb4ed414c4b7c2d197ac97e2ab1d05865817a23aa685f97a30e7378b1842-merged.mount: Deactivated successfully.
Nov 25 20:10:55 compute-0 podman[112461]: 2025-11-25 20:10:55.68213792 +0000 UTC m=+0.195867833 container remove 1146fef80e6b280e5facf9418684ccdd91f757bd28ff252b8a4e918801686048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:10:55 compute-0 systemd[1]: libpod-conmon-1146fef80e6b280e5facf9418684ccdd91f757bd28ff252b8a4e918801686048.scope: Deactivated successfully.
Nov 25 20:10:55 compute-0 sudo[112557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzfxtbxozhgtohkmggbssyptcgyubeiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101455.1955738-71-271351374993781/AnsiballZ_command.py'
Nov 25 20:10:55 compute-0 sudo[112557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:55 compute-0 podman[112567]: 2025-11-25 20:10:55.849004245 +0000 UTC m=+0.060634694 container create 4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:10:55 compute-0 python3.9[112561]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:10:55 compute-0 systemd[1]: Started libpod-conmon-4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5.scope.
Nov 25 20:10:55 compute-0 podman[112567]: 2025-11-25 20:10:55.82277665 +0000 UTC m=+0.034407109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:10:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20193e9cc5024c460f7b466349c51073ca5d92ac4b8a171a1922349dc08e8b1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20193e9cc5024c460f7b466349c51073ca5d92ac4b8a171a1922349dc08e8b1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20193e9cc5024c460f7b466349c51073ca5d92ac4b8a171a1922349dc08e8b1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20193e9cc5024c460f7b466349c51073ca5d92ac4b8a171a1922349dc08e8b1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:55 compute-0 podman[112567]: 2025-11-25 20:10:55.970174559 +0000 UTC m=+0.181805058 container init 4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:10:55 compute-0 podman[112567]: 2025-11-25 20:10:55.97798366 +0000 UTC m=+0.189614129 container start 4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:10:55 compute-0 podman[112567]: 2025-11-25 20:10:55.981935716 +0000 UTC m=+0.193566245 container attach 4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:10:56 compute-0 ceph-mon[75144]: pgmap v179: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:56 compute-0 ceph-mon[75144]: 4.12 scrub starts
Nov 25 20:10:56 compute-0 ceph-mon[75144]: 4.12 scrub ok
Nov 25 20:10:56 compute-0 ceph-mon[75144]: 3.17 scrub starts
Nov 25 20:10:56 compute-0 ceph-mon[75144]: 3.17 scrub ok
Nov 25 20:10:56 compute-0 sudo[112557]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:10:56 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 25 20:10:56 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 25 20:10:56 compute-0 sudo[112761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pavosxpcxdkldqyyhlizvrwbqfeouibs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101456.2063608-79-19940602826555/AnsiballZ_stat.py'
Nov 25 20:10:56 compute-0 sudo[112761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v180: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:10:56
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'backups', '.mgr', 'vms', 'cephfs.cephfs.meta']
Nov 25 20:10:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:10:56 compute-0 python3.9[112764]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:57 compute-0 sudo[112761]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:57 compute-0 ceph-mon[75144]: 5.1e scrub starts
Nov 25 20:10:57 compute-0 ceph-mon[75144]: 5.1e scrub ok
Nov 25 20:10:57 compute-0 ceph-mon[75144]: pgmap v180: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:57 compute-0 elastic_banzai[112585]: {
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "osd_id": 2,
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "type": "bluestore"
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:     },
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "osd_id": 1,
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "type": "bluestore"
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:     },
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "osd_id": 0,
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:         "type": "bluestore"
Nov 25 20:10:57 compute-0 elastic_banzai[112585]:     }
Nov 25 20:10:57 compute-0 elastic_banzai[112585]: }
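This second dump is the result of the `ceph-volume ... -- raw list --format json` call issued through the cephadm wrapper (sudo[112322] at 20:10:55 above): unlike the lvm list output, it is keyed by osd_uuid, and the device paths are the /dev/mapper aliases of the same logical volumes. A short Python sketch cross-checking the two dumps against each other, assuming hypothetical local copies raw_list.json and lvm_list.json:

    import json

    # raw_list.json / lvm_list.json: hypothetical local copies of the two
    # ceph-volume JSON dumps logged above.
    with open("raw_list.json") as f:
        raw = json.load(f)
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # raw list is keyed by osd_uuid, lvm list by osd_id; both views should
    # agree on which OSD fsid each id carries.
    for osd_uuid, rec in raw.items():
        osd_id = str(rec["osd_id"])
        tags = lvm[osd_id][0]["tags"]
        assert tags["ceph.osd_fsid"] == osd_uuid, f"mismatch for osd.{osd_id}"
        print(f"osd.{osd_id}: {rec['device']} ({rec['type']}) fsid verified")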
Nov 25 20:10:57 compute-0 systemd[1]: libpod-4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5.scope: Deactivated successfully.
Nov 25 20:10:57 compute-0 podman[112567]: 2025-11-25 20:10:57.042139601 +0000 UTC m=+1.253770050 container died 4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:10:57 compute-0 systemd[1]: libpod-4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5.scope: Consumed 1.065s CPU time.
Nov 25 20:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-20193e9cc5024c460f7b466349c51073ca5d92ac4b8a171a1922349dc08e8b1b-merged.mount: Deactivated successfully.
Nov 25 20:10:57 compute-0 podman[112567]: 2025-11-25 20:10:57.095318933 +0000 UTC m=+1.306949372 container remove 4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:10:57 compute-0 systemd[1]: libpod-conmon-4f960786250bc709d42315d8c2757d2a95a521952feaed198f1732fc440ec2b5.scope: Deactivated successfully.
Nov 25 20:10:57 compute-0 sudo[112322]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:10:57 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:10:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:10:57 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:10:57 compute-0 sudo[112828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:10:57 compute-0 sudo[112828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:57 compute-0 sudo[112828]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:57 compute-0 sudo[112914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idjwlbwnifvxkczqrorrrbqzjktnvmag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101456.2063608-79-19940602826555/AnsiballZ_file.py'
Nov 25 20:10:57 compute-0 sudo[112914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:57 compute-0 sudo[112878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:10:57 compute-0 sudo[112878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:10:57 compute-0 sudo[112878]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:57 compute-0 python3.9[112920]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:57 compute-0 sudo[112914]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:58 compute-0 sudo[113072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dprkgrgnvztosvttqtucfmgeqsyllscl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101457.7568374-91-237630792100299/AnsiballZ_stat.py'
Nov 25 20:10:58 compute-0 sudo[113072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:10:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:10:58 compute-0 python3.9[113074]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:58 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 20:10:58 compute-0 sudo[113072]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:58 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 20:10:58 compute-0 sudo[113150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimrhmctsbgrdwficyqtveswxnbvyuef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101457.7568374-91-237630792100299/AnsiballZ_file.py'
Nov 25 20:10:58 compute-0 sudo[113150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v181: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:58 compute-0 python3.9[113152]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:10:58 compute-0 sudo[113150]: pam_unix(sudo:session): session closed for user root
Nov 25 20:10:59 compute-0 ceph-mon[75144]: 4.14 scrub starts
Nov 25 20:10:59 compute-0 ceph-mon[75144]: 4.14 scrub ok
Nov 25 20:10:59 compute-0 ceph-mon[75144]: pgmap v181: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:10:59 compute-0 sudo[113302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erxjmcydceuhbyrdyqrgogdqglecxouu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101459.1344323-104-190057777211724/AnsiballZ_ini_file.py'
Nov 25 20:10:59 compute-0 sudo[113302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:10:59 compute-0 python3.9[113304]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:10:59 compute-0 sudo[113302]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:00 compute-0 sudo[113454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdlwcoqvymkcjmhwxnxnkupflhrmjikc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101460.0722368-104-127670917561447/AnsiballZ_ini_file.py'
Nov 25 20:11:00 compute-0 sudo[113454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:00 compute-0 python3.9[113456]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:11:00 compute-0 sudo[113454]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v182: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:01 compute-0 sudo[113606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afrzfgvnxhlluftwrsyumocrsukgkfva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101460.8223772-104-10859376869008/AnsiballZ_ini_file.py'
Nov 25 20:11:01 compute-0 sudo[113606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:01 compute-0 python3.9[113608]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:11:01 compute-0 sudo[113606]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:01 compute-0 ceph-mon[75144]: pgmap v182: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:02 compute-0 sudo[113758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hinqudxvatcwrkeasrpocllnzwjjqzrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101461.6331031-104-108676097003323/AnsiballZ_ini_file.py'
Nov 25 20:11:02 compute-0 sudo[113758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
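The pg_autoscaler figures above are internally consistent: for the '.mgr' pool, the pg target 0.004311449990232467 is exactly the capacity ratio 1.4371499967441557e-05 multiplied by 300, consistent with the default mon_target_pg_per_osd of 100 across this host's three OSDs (an inference from the numbers, not something the log states); the result is then quantized to a power of two with a floor of 1, hence "quantized to 1". A one-line check:

    # Reproducing the '.mgr' pg target from the autoscaler lines above,
    # assuming mon_target_pg_per_osd=100 (default) and 3 OSDs.
    capacity_ratio = 1.4371499967441557e-05
    print(capacity_ratio * 100 * 3)  # 0.004311449990232467 -> quantized to 1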
Nov 25 20:11:02 compute-0 python3.9[113760]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:11:02 compute-0 sudo[113758]: pam_unix(sudo:session): session closed for user root
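Taken together, the four ini_file tasks above (pids_limit, events_logger, runtime, network_backend) build up /etc/containers/containers.conf section by section; after all four have run, the file should contain roughly the following, reconstructed from the logged module arguments, including the quoting that ini_file preserves from the value= parameters:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"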
Nov 25 20:11:02 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.10 deep-scrub starts
Nov 25 20:11:02 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.10 deep-scrub ok
Nov 25 20:11:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v183: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:02 compute-0 sudo[113910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydpvmrliynjfjraeexzsnbdqbsmmibwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101462.5170772-135-59209490866321/AnsiballZ_dnf.py'
Nov 25 20:11:02 compute-0 sudo[113910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:03 compute-0 python3.9[113912]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:11:03 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 20:11:03 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 20:11:03 compute-0 ceph-mon[75144]: 4.10 deep-scrub starts
Nov 25 20:11:03 compute-0 ceph-mon[75144]: 4.10 deep-scrub ok
Nov 25 20:11:03 compute-0 ceph-mon[75144]: pgmap v183: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:04 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 20:11:04 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 20:11:04 compute-0 sudo[113910]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v184: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:04 compute-0 ceph-mon[75144]: 6.d scrub starts
Nov 25 20:11:04 compute-0 ceph-mon[75144]: 6.d scrub ok
Nov 25 20:11:05 compute-0 sudo[114063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcxezgwzohivpizznztjarazdwkxkjse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101464.832022-146-63956992549511/AnsiballZ_setup.py'
Nov 25 20:11:05 compute-0 sudo[114063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:05 compute-0 python3.9[114065]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:11:05 compute-0 sudo[114063]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:05 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts
Nov 25 20:11:05 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok
Nov 25 20:11:05 compute-0 ceph-mon[75144]: 6.c scrub starts
Nov 25 20:11:05 compute-0 ceph-mon[75144]: 6.c scrub ok
Nov 25 20:11:05 compute-0 ceph-mon[75144]: pgmap v184: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:06 compute-0 sudo[114217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhejlmbvhhbsuderodfrffwqeudcnclz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101465.751619-154-70436436333587/AnsiballZ_stat.py'
Nov 25 20:11:06 compute-0 sudo[114217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:06 compute-0 python3.9[114219]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:11:06 compute-0 sudo[114217]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v185: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:06 compute-0 ceph-mon[75144]: 5.5 deep-scrub starts
Nov 25 20:11:06 compute-0 ceph-mon[75144]: 5.5 deep-scrub ok
Nov 25 20:11:06 compute-0 sudo[114369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyzaqklsrlbwlkltdoprvmtzmiihrpya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101466.6424537-163-206530073469982/AnsiballZ_stat.py'
Nov 25 20:11:07 compute-0 sudo[114369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:07 compute-0 python3.9[114371]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:11:07 compute-0 sudo[114369]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:07 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.4 deep-scrub starts
Nov 25 20:11:07 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.4 deep-scrub ok
Nov 25 20:11:07 compute-0 sudo[114521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plkqrmfegibqfocaqbvxyjhnrrerklvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101467.510727-173-256829110752361/AnsiballZ_command.py'
Nov 25 20:11:07 compute-0 sudo[114521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:07 compute-0 ceph-mon[75144]: pgmap v185: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:07 compute-0 ceph-mon[75144]: 5.4 deep-scrub starts
Nov 25 20:11:07 compute-0 ceph-mon[75144]: 5.4 deep-scrub ok
Nov 25 20:11:08 compute-0 python3.9[114523]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:11:08 compute-0 sudo[114521]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v186: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:08 compute-0 sudo[114674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmshvvwfojyyfxspwwqpcdtavucxxbzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101468.407514-183-128628878852252/AnsiballZ_service_facts.py'
Nov 25 20:11:08 compute-0 sudo[114674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:09 compute-0 python3.9[114676]: ansible-service_facts Invoked
Nov 25 20:11:09 compute-0 network[114693]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:11:09 compute-0 network[114694]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:11:09 compute-0 network[114695]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:11:09 compute-0 ceph-mon[75144]: pgmap v186: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v187: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:11 compute-0 ceph-mon[75144]: pgmap v187: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:12 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Nov 25 20:11:12 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Nov 25 20:11:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v188: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:12 compute-0 ceph-mon[75144]: 5.3 deep-scrub starts
Nov 25 20:11:12 compute-0 ceph-mon[75144]: 5.3 deep-scrub ok
Nov 25 20:11:13 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 25 20:11:13 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 25 20:11:13 compute-0 ceph-mon[75144]: pgmap v188: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:13 compute-0 ceph-mon[75144]: 5.14 scrub starts
Nov 25 20:11:13 compute-0 ceph-mon[75144]: 5.14 scrub ok
Nov 25 20:11:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v189: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:15 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 25 20:11:15 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 25 20:11:15 compute-0 sudo[114674]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:15 compute-0 ceph-mon[75144]: pgmap v189: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:15 compute-0 ceph-mon[75144]: 4.f scrub starts
Nov 25 20:11:15 compute-0 ceph-mon[75144]: 4.f scrub ok
Nov 25 20:11:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:16 compute-0 sudo[114978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evibbrlidzxgxpfinyrnhfixtktetucc ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764101476.1014845-198-54577833718371/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764101476.1014845-198-54577833718371/args'
Nov 25 20:11:16 compute-0 sudo[114978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:16 compute-0 sudo[114978]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v190: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:17 compute-0 sudo[115145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prqnnjerdwkxufrbeqyocwtwubulxjlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101476.9218528-209-239814455939794/AnsiballZ_dnf.py'
Nov 25 20:11:17 compute-0 sudo[115145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:17 compute-0 python3.9[115147]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:11:17 compute-0 ceph-mon[75144]: pgmap v190: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:18 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 20:11:18 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 20:11:18 compute-0 sudo[115145]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v191: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:18 compute-0 ceph-mon[75144]: 6.e scrub starts
Nov 25 20:11:18 compute-0 ceph-mon[75144]: 6.e scrub ok
Nov 25 20:11:19 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 25 20:11:19 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 25 20:11:19 compute-0 sudo[115298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spbbymesnmdyfhvkykkzfsdcyofouuyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101479.057449-222-279450780879737/AnsiballZ_package_facts.py'
Nov 25 20:11:19 compute-0 sudo[115298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:19 compute-0 ceph-mon[75144]: pgmap v191: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:19 compute-0 ceph-mon[75144]: 5.15 scrub starts
Nov 25 20:11:19 compute-0 ceph-mon[75144]: 5.15 scrub ok
Nov 25 20:11:20 compute-0 python3.9[115300]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 20:11:20 compute-0 sudo[115298]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:20 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 25 20:11:20 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 25 20:11:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v192: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:20 compute-0 ceph-mon[75144]: 6.2 scrub starts
Nov 25 20:11:20 compute-0 ceph-mon[75144]: 6.2 scrub ok
Nov 25 20:11:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:21 compute-0 sudo[115450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heyftvqgkkniinwvrhkedirnqfeprouz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101480.7410562-232-53254636359360/AnsiballZ_stat.py'
Nov 25 20:11:21 compute-0 sudo[115450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:21 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 20:11:21 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 20:11:21 compute-0 python3.9[115452]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:21 compute-0 sudo[115450]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:21 compute-0 sudo[115528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvexmselymqaznzhbvwctzmldjyuzukq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101480.7410562-232-53254636359360/AnsiballZ_file.py'
Nov 25 20:11:21 compute-0 sudo[115528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:22 compute-0 ceph-mon[75144]: pgmap v192: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:22 compute-0 ceph-mon[75144]: 4.2 scrub starts
Nov 25 20:11:22 compute-0 ceph-mon[75144]: 4.2 scrub ok
Nov 25 20:11:22 compute-0 python3.9[115530]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:22 compute-0 sudo[115528]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:22 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Nov 25 20:11:22 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Nov 25 20:11:22 compute-0 sudo[115680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czjklzwchtewebfbxtusariqfhagvlrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101482.3001876-244-113306071779241/AnsiballZ_stat.py'
Nov 25 20:11:22 compute-0 sudo[115680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:22 compute-0 python3.9[115682]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v193: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:22 compute-0 sudo[115680]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:23 compute-0 ceph-mon[75144]: 6.17 scrub starts
Nov 25 20:11:23 compute-0 ceph-mon[75144]: 6.17 scrub ok
Nov 25 20:11:23 compute-0 sudo[115758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdzmdmqtopjjxkjmtoypctvxqmzvpsrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101482.3001876-244-113306071779241/AnsiballZ_file.py'
Nov 25 20:11:23 compute-0 sudo[115758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:23 compute-0 python3.9[115760]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:23 compute-0 sudo[115758]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:23 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 25 20:11:23 compute-0 ceph-osd[89084]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 25 20:11:24 compute-0 ceph-mon[75144]: pgmap v193: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:24 compute-0 ceph-mon[75144]: 5.7 scrub starts
Nov 25 20:11:24 compute-0 ceph-mon[75144]: 5.7 scrub ok
Nov 25 20:11:24 compute-0 sudo[115910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdqkbhvlwnvxdzxzyocppnzetegqelpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101483.81763-262-18088545771050/AnsiballZ_lineinfile.py'
Nov 25 20:11:24 compute-0 sudo[115910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:24 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.d deep-scrub starts
Nov 25 20:11:24 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.d deep-scrub ok
Nov 25 20:11:24 compute-0 python3.9[115912]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:24 compute-0 sudo[115910]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v194: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:25 compute-0 ceph-mon[75144]: 4.d deep-scrub starts
Nov 25 20:11:25 compute-0 ceph-mon[75144]: 4.d deep-scrub ok
Nov 25 20:11:25 compute-0 ceph-mon[75144]: pgmap v194: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:25 compute-0 sudo[116062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqpydkypqzbxgvnfaqxbmdgthfslsawy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101485.089238-277-154082141855734/AnsiballZ_setup.py'
Nov 25 20:11:25 compute-0 sudo[116062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:25 compute-0 python3.9[116064]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:11:26 compute-0 sudo[116062]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:26 compute-0 sudo[116146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gruexfcyxkspnxbvxmvnmscxcwkajshh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101485.089238-277-154082141855734/AnsiballZ_systemd.py'
Nov 25 20:11:26 compute-0 sudo[116146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:11:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:11:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:11:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:11:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:11:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:11:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v195: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:27 compute-0 python3.9[116148]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:11:27 compute-0 sudo[116146]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:27 compute-0 sshd-session[110751]: Connection closed by 192.168.122.30 port 44258
Nov 25 20:11:27 compute-0 sshd-session[110748]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:11:27 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 25 20:11:27 compute-0 systemd[1]: session-38.scope: Consumed 27.132s CPU time.
Nov 25 20:11:27 compute-0 systemd-logind[789]: Session 38 logged out. Waiting for processes to exit.
Nov 25 20:11:27 compute-0 systemd-logind[789]: Removed session 38.
Nov 25 20:11:27 compute-0 ceph-mon[75144]: pgmap v195: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:28 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Nov 25 20:11:28 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Nov 25 20:11:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v196: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:28 compute-0 ceph-mon[75144]: 4.4 deep-scrub starts
Nov 25 20:11:28 compute-0 ceph-mon[75144]: 4.4 deep-scrub ok
Nov 25 20:11:29 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 20:11:29 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 20:11:29 compute-0 ceph-mon[75144]: pgmap v196: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:29 compute-0 ceph-mon[75144]: 4.9 scrub starts
Nov 25 20:11:29 compute-0 ceph-mon[75144]: 4.9 scrub ok
Nov 25 20:11:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v197: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:31 compute-0 ceph-mon[75144]: pgmap v197: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v198: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:33 compute-0 sshd-session[116175]: Accepted publickey for zuul from 192.168.122.30 port 53634 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:11:33 compute-0 systemd-logind[789]: New session 39 of user zuul.
Nov 25 20:11:33 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 25 20:11:33 compute-0 sshd-session[116175]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:11:34 compute-0 ceph-mon[75144]: pgmap v198: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:34 compute-0 sudo[116328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjotzclmlslxhqdnatyzyxmemaxfsawe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101493.7527592-22-160779805847346/AnsiballZ_file.py'
Nov 25 20:11:34 compute-0 sudo[116328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:34 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Nov 25 20:11:34 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Nov 25 20:11:34 compute-0 python3.9[116330]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:34 compute-0 sudo[116328]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v199: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:35 compute-0 ceph-mon[75144]: 6.b deep-scrub starts
Nov 25 20:11:35 compute-0 ceph-mon[75144]: 6.b deep-scrub ok
Nov 25 20:11:35 compute-0 sudo[116480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbyannufrqryhhofkbupguvleyoywjva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101494.7712584-34-98191582375251/AnsiballZ_stat.py'
Nov 25 20:11:35 compute-0 sudo[116480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:35 compute-0 python3.9[116482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:35 compute-0 sudo[116480]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:35 compute-0 sudo[116558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pavmntdvdkckowvfzprzmqhuravdmgqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101494.7712584-34-98191582375251/AnsiballZ_file.py'
Nov 25 20:11:35 compute-0 sudo[116558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:36 compute-0 ceph-mon[75144]: pgmap v199: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:36 compute-0 python3.9[116560]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:36 compute-0 sudo[116558]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:36 compute-0 sshd-session[116178]: Connection closed by 192.168.122.30 port 53634
Nov 25 20:11:36 compute-0 sshd-session[116175]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:11:36 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 25 20:11:36 compute-0 systemd[1]: session-39.scope: Consumed 1.875s CPU time.
Nov 25 20:11:36 compute-0 systemd-logind[789]: Session 39 logged out. Waiting for processes to exit.
Nov 25 20:11:36 compute-0 systemd-logind[789]: Removed session 39.
Nov 25 20:11:36 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 25 20:11:36 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 25 20:11:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v200: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:37 compute-0 ceph-mon[75144]: 4.5 scrub starts
Nov 25 20:11:37 compute-0 ceph-mon[75144]: 4.5 scrub ok
Nov 25 20:11:38 compute-0 ceph-mon[75144]: pgmap v200: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v201: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:39 compute-0 ceph-mon[75144]: pgmap v201: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v202: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:41 compute-0 sshd-session[116585]: Accepted publickey for zuul from 192.168.122.30 port 55832 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:11:41 compute-0 systemd-logind[789]: New session 40 of user zuul.
Nov 25 20:11:41 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 25 20:11:41 compute-0 sshd-session[116585]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:11:41 compute-0 ceph-mon[75144]: pgmap v202: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v203: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:42 compute-0 python3.9[116738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:11:43 compute-0 ceph-mon[75144]: pgmap v203: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:43 compute-0 sudo[116892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlfdbqprehiqtvxokflricgtyqydtiyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101503.4202456-33-26015857414721/AnsiballZ_file.py'
Nov 25 20:11:43 compute-0 sudo[116892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:44 compute-0 python3.9[116894]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:44 compute-0 sudo[116892]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v204: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:44 compute-0 sudo[117067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vraksgmjgwnzddvfqyjorwsuxksznshm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101504.4384336-41-208586379695970/AnsiballZ_stat.py'
Nov 25 20:11:45 compute-0 sudo[117067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:45 compute-0 python3.9[117069]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:45 compute-0 sudo[117067]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:45 compute-0 sudo[117145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teicvidkiwnkfonxequyngsctiiixdtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101504.4384336-41-208586379695970/AnsiballZ_file.py'
Nov 25 20:11:45 compute-0 sudo[117145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:45 compute-0 python3.9[117147]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.o9kz933c recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:45 compute-0 sudo[117145]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:45 compute-0 ceph-mon[75144]: pgmap v204: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:46 compute-0 sudo[117297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clvhumwtcowebhvlhluqmjxsfuxrkstt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101506.07176-61-76584922461761/AnsiballZ_stat.py'
Nov 25 20:11:46 compute-0 sudo[117297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:46 compute-0 python3.9[117299]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:46 compute-0 sudo[117297]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v205: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:46 compute-0 systemd[76777]: Created slice User Background Tasks Slice.
Nov 25 20:11:46 compute-0 systemd[76777]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 20:11:46 compute-0 sudo[117376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kglmesevfsnmdntjnaredbmeumlqrrzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101506.07176-61-76584922461761/AnsiballZ_file.py'
Nov 25 20:11:46 compute-0 sudo[117376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:46 compute-0 systemd[76777]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 20:11:47 compute-0 python3.9[117378]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ijneslku recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:47 compute-0 sudo[117376]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:47 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 25 20:11:47 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 25 20:11:47 compute-0 sudo[117528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfnnkksxwcvulcthfwhmuxmikqxdotff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101507.327131-74-279519126993182/AnsiballZ_file.py'
Nov 25 20:11:47 compute-0 sudo[117528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:47 compute-0 python3.9[117530]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:11:47 compute-0 sudo[117528]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:47 compute-0 ceph-mon[75144]: pgmap v205: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:47 compute-0 ceph-mon[75144]: 6.4 deep-scrub starts
Nov 25 20:11:47 compute-0 ceph-mon[75144]: 6.4 deep-scrub ok
Nov 25 20:11:48 compute-0 sudo[117680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tajemvfxgrtiqsvhqjfeypxaqygblwtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101508.0487263-82-164535513857025/AnsiballZ_stat.py'
Nov 25 20:11:48 compute-0 sudo[117680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:48 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 25 20:11:48 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 25 20:11:48 compute-0 python3.9[117682]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:48 compute-0 sudo[117680]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:48 compute-0 sudo[117758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djjhzvaktuhmjflnkpvacrjkgfelhmqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101508.0487263-82-164535513857025/AnsiballZ_file.py'
Nov 25 20:11:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v206: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:48 compute-0 sudo[117758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:48 compute-0 ceph-mon[75144]: 4.7 scrub starts
Nov 25 20:11:48 compute-0 ceph-mon[75144]: 4.7 scrub ok
Nov 25 20:11:49 compute-0 python3.9[117760]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:11:49 compute-0 sudo[117758]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:49 compute-0 sudo[117910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avqfujhcghxwpbjhgyuigzwdoazmenrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101509.2150943-82-121040853630587/AnsiballZ_stat.py'
Nov 25 20:11:49 compute-0 sudo[117910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:49 compute-0 python3.9[117912]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:49 compute-0 sudo[117910]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:49 compute-0 ceph-mon[75144]: pgmap v206: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:50 compute-0 sudo[117988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-incppjpdkjxgvhhfiipfnqhmrrbbverp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101509.2150943-82-121040853630587/AnsiballZ_file.py'
Nov 25 20:11:50 compute-0 sudo[117988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:50 compute-0 python3.9[117990]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:11:50 compute-0 sudo[117988]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:50 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 25 20:11:50 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 25 20:11:50 compute-0 sudo[118140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmusxbopzuslmrwavcnyzowcgchbhzna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101510.4402878-105-60403216501190/AnsiballZ_file.py'
Nov 25 20:11:50 compute-0 sudo[118140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v207: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:50 compute-0 ceph-mon[75144]: 6.1e scrub starts
Nov 25 20:11:50 compute-0 ceph-mon[75144]: 6.1e scrub ok
Nov 25 20:11:51 compute-0 python3.9[118142]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:51 compute-0 sudo[118140]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:51 compute-0 sudo[118292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxvezifsixsfcjraoalekskdkkvqkhwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101511.2541864-113-117942769220842/AnsiballZ_stat.py'
Nov 25 20:11:51 compute-0 sudo[118292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:51 compute-0 python3.9[118294]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:51 compute-0 sudo[118292]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:51 compute-0 ceph-mon[75144]: pgmap v207: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:52 compute-0 sudo[118370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnsjsiznsgvtdlnbpvtqiazcspeoazxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101511.2541864-113-117942769220842/AnsiballZ_file.py'
Nov 25 20:11:52 compute-0 sudo[118370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:52 compute-0 python3.9[118372]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:52 compute-0 sudo[118370]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:52 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 20:11:52 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 20:11:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v208: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:52 compute-0 ceph-mon[75144]: 4.8 scrub starts
Nov 25 20:11:52 compute-0 ceph-mon[75144]: 4.8 scrub ok
Nov 25 20:11:53 compute-0 sudo[118522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbjjjlrzkqjefmcgwrckshlkuurcqsvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101512.643901-125-137114207738528/AnsiballZ_stat.py'
Nov 25 20:11:53 compute-0 sudo[118522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:53 compute-0 python3.9[118524]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:53 compute-0 sudo[118522]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:53 compute-0 sudo[118600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlnqppqvsarpzlqtrbpebpjysvjocxfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101512.643901-125-137114207738528/AnsiballZ_file.py'
Nov 25 20:11:53 compute-0 sudo[118600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:53 compute-0 python3.9[118602]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:53 compute-0 sudo[118600]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:53 compute-0 ceph-mon[75144]: pgmap v208: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:54 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 25 20:11:54 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 25 20:11:54 compute-0 sudo[118752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyzbexvwdmxckfijxohlufnmazlkbfoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101514.0165327-137-249233630577759/AnsiballZ_systemd.py'
Nov 25 20:11:54 compute-0 sudo[118752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v209: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:54 compute-0 python3.9[118754]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:11:54 compute-0 systemd[1]: Reloading.
Nov 25 20:11:54 compute-0 ceph-mon[75144]: 6.1 scrub starts
Nov 25 20:11:54 compute-0 ceph-mon[75144]: 6.1 scrub ok
Nov 25 20:11:55 compute-0 systemd-rc-local-generator[118784]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:11:55 compute-0 systemd-sysv-generator[118787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:11:55 compute-0 sudo[118752]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:55 compute-0 sudo[118942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clrqdgijvkoqpdnqemyuoedpuzgfmjdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101515.581363-145-23971499800145/AnsiballZ_stat.py'
Nov 25 20:11:55 compute-0 sudo[118942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:56 compute-0 ceph-mon[75144]: pgmap v209: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:11:56 compute-0 python3.9[118944]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:56 compute-0 sudo[118942]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:56 compute-0 sudo[119020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stxvolwfqyqebxrbzmnofatukrgowwgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101515.581363-145-23971499800145/AnsiballZ_file.py'
Nov 25 20:11:56 compute-0 sudo[119020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:56 compute-0 python3.9[119022]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:56 compute-0 sudo[119020]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v210: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:11:56
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms', 'cephfs.cephfs.data']
Nov 25 20:11:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:11:57 compute-0 sudo[119195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snhnmytrnemgzzoqrowhdqnelhkgbveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101517.033268-157-160802164650232/AnsiballZ_stat.py'
Nov 25 20:11:57 compute-0 sudo[119147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:11:57 compute-0 sudo[119195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:57 compute-0 sudo[119147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:11:57 compute-0 sudo[119147]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:57 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 25 20:11:57 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 25 20:11:57 compute-0 sudo[119200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:11:57 compute-0 sudo[119200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:11:57 compute-0 sudo[119200]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:57 compute-0 python3.9[119198]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:57 compute-0 sudo[119225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:11:57 compute-0 sudo[119225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:11:57 compute-0 sudo[119225]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:57 compute-0 sudo[119195]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:57 compute-0 sudo[119252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:11:57 compute-0 sudo[119252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:11:57 compute-0 sudo[119365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnenwdunbbdhfrrrafjlcnvhkblnbdiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101517.033268-157-160802164650232/AnsiballZ_file.py'
Nov 25 20:11:57 compute-0 sudo[119365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:58 compute-0 ceph-mon[75144]: pgmap v210: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:58 compute-0 ceph-mon[75144]: 6.1d scrub starts
Nov 25 20:11:58 compute-0 ceph-mon[75144]: 6.1d scrub ok
Nov 25 20:11:58 compute-0 python3.9[119367]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:58 compute-0 sudo[119365]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:58 compute-0 sudo[119252]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:11:58 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:11:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:11:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:11:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:11:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:11:58 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev ca6f6819-c7cc-4eb3-b0ce-828abbb45ea7 does not exist
Nov 25 20:11:58 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev cb5c8f3a-922d-4102-9554-f06a3e6905d4 does not exist
Nov 25 20:11:58 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 73c5e89b-5de5-456c-86d2-9da4daf0176e does not exist
Nov 25 20:11:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:11:58 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:11:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:11:58 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:11:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:11:58 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:11:58 compute-0 sudo[119409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:11:58 compute-0 sudo[119409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:11:58 compute-0 sudo[119409]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:58 compute-0 sudo[119463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:11:58 compute-0 sudo[119463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:11:58 compute-0 sudo[119463]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:58 compute-0 sudo[119514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:11:58 compute-0 sudo[119514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:11:58 compute-0 sudo[119514]: pam_unix(sudo:session): session closed for user root
Nov 25 20:11:58 compute-0 sudo[119559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:11:58 compute-0 sudo[119559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
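In the COMMAND recorded two lines up, everything before the bare "--" belongs to the cephadm wrapper itself (--env, --image, --timeout, and "--config-json -", which feeds a config/keyring blob on stdin), and everything after it is the ceph-volume call proper: "lvm batch --no-auto ... --yes --no-systemd" against the three pre-built LVs. A small sketch that splits such a command string at the separator (the string below is abbreviated from the log line, not the verbatim command):

    import shlex

    command = (
        "/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab2 "
        "--image quay.io/ceph/ceph@sha256:1b9158ce --timeout 895 "
        "ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- "
        "lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 "
        "/dev/ceph_vg2/ceph_lv2 --yes --no-systemd"
    )

    argv = shlex.split(command)
    sep = argv.index("--")                  # first bare separator token
    wrapper, inner = argv[:sep], argv[sep + 1:]

    print("cephadm wrapper args:", wrapper)
    print("ceph-volume subcommand:", inner)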
Nov 25 20:11:58 compute-0 sudo[119634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtsujzcuqebmysifqlbiyheqcbwlqoic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101518.2509542-169-250654391770855/AnsiballZ_systemd.py'
Nov 25 20:11:58 compute-0 sudo[119634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:11:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v211: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:11:58 compute-0 podman[119676]: 2025-11-25 20:11:58.869762471 +0000 UTC m=+0.058175967 container create b213998284838509e00c80f0f123939b82dc51ef6d80f39d5dc7c00436fe6f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:11:58 compute-0 systemd[1]: Started libpod-conmon-b213998284838509e00c80f0f123939b82dc51ef6d80f39d5dc7c00436fe6f64.scope.
Nov 25 20:11:58 compute-0 python3.9[119636]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:11:58 compute-0 podman[119676]: 2025-11-25 20:11:58.844594787 +0000 UTC m=+0.033008373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:11:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:11:58 compute-0 systemd[1]: Reloading.
Nov 25 20:11:58 compute-0 podman[119676]: 2025-11-25 20:11:58.960220199 +0000 UTC m=+0.148633745 container init b213998284838509e00c80f0f123939b82dc51ef6d80f39d5dc7c00436fe6f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:11:58 compute-0 podman[119676]: 2025-11-25 20:11:58.971175939 +0000 UTC m=+0.159589475 container start b213998284838509e00c80f0f123939b82dc51ef6d80f39d5dc7c00436fe6f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:11:58 compute-0 podman[119676]: 2025-11-25 20:11:58.975463742 +0000 UTC m=+0.163877298 container attach b213998284838509e00c80f0f123939b82dc51ef6d80f39d5dc7c00436fe6f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 20:11:58 compute-0 epic_margulis[119694]: 167 167
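Two reading aids for the podman block above: the m=+<seconds> field appears to be Go's monotonic offset since the podman process started, so entries can reach the journal out of offset order (the image pull at m=+0.033 is printed after the create at m=+0.058); and the container's single line of output, "167 167", is most likely cephadm probing the image for the ceph user's uid/gid pair (167 is the fixed ceph uid/gid in CentOS/RHEL packaging). A sketch that re-sorts such entries by their monotonic offset (the regex and the truncated samples are fitted to these lines):

    import re

    entries = [
        "2025-11-25 20:11:58.869762471 +0000 UTC m=+0.058175967 container create b2139982",
        "2025-11-25 20:11:58.844594787 +0000 UTC m=+0.033008373 image pull 0f5473a1e726",
        "2025-11-25 20:11:58.960220199 +0000 UTC m=+0.148633745 container init b2139982",
    ]

    def monotonic_offset(entry: str) -> float:
        # m=+<seconds> is the process-relative monotonic timestamp.
        return float(re.search(r"m=\+([\d.]+)", entry).group(1))

    for entry in sorted(entries, key=monotonic_offset):
        print(entry)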
Nov 25 20:11:59 compute-0 systemd-sysv-generator[119731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:11:59 compute-0 systemd-rc-local-generator[119725]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:11:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:11:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:11:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:11:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:11:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:11:59 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:11:59 compute-0 podman[119699]: 2025-11-25 20:11:59.034654185 +0000 UTC m=+0.038207290 container died b213998284838509e00c80f0f123939b82dc51ef6d80f39d5dc7c00436fe6f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:11:59 compute-0 systemd[1]: libpod-b213998284838509e00c80f0f123939b82dc51ef6d80f39d5dc7c00436fe6f64.scope: Deactivated successfully.
Nov 25 20:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc138f8fd25a17e14ff8b87e4b7ff56581d625f45fff823f85eecaf499d4fcd0-merged.mount: Deactivated successfully.
Nov 25 20:11:59 compute-0 podman[119699]: 2025-11-25 20:11:59.273977163 +0000 UTC m=+0.277530268 container remove b213998284838509e00c80f0f123939b82dc51ef6d80f39d5dc7c00436fe6f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 20:11:59 compute-0 systemd[1]: libpod-conmon-b213998284838509e00c80f0f123939b82dc51ef6d80f39d5dc7c00436fe6f64.scope: Deactivated successfully.
Nov 25 20:11:59 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 25 20:11:59 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 25 20:11:59 compute-0 podman[119757]: 2025-11-25 20:11:59.530513187 +0000 UTC m=+0.063450886 container create aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:11:59 compute-0 systemd[1]: Started libpod-conmon-aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e.scope.
Nov 25 20:11:59 compute-0 podman[119757]: 2025-11-25 20:11:59.506647647 +0000 UTC m=+0.039585336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:11:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e244bf14f103f372cb6050a9591cc467e9c2f5805c362224e728e3c5597e976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e244bf14f103f372cb6050a9591cc467e9c2f5805c362224e728e3c5597e976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e244bf14f103f372cb6050a9591cc467e9c2f5805c362224e728e3c5597e976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e244bf14f103f372cb6050a9591cc467e9c2f5805c362224e728e3c5597e976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e244bf14f103f372cb6050a9591cc467e9c2f5805c362224e728e3c5597e976/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
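The xfs "supports timestamps until 2038 (0x7fffffff)" lines above flag bind mounts whose on-disk inode format still uses 32-bit timestamps; 0x7fffffff seconds is the classic Y2038 limit. A one-liner confirming the cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the 32-bit time_t rollover point.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00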
Nov 25 20:11:59 compute-0 podman[119757]: 2025-11-25 20:11:59.665608124 +0000 UTC m=+0.198545883 container init aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 20:11:59 compute-0 podman[119757]: 2025-11-25 20:11:59.679767288 +0000 UTC m=+0.212704957 container start aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:11:59 compute-0 podman[119757]: 2025-11-25 20:11:59.683586809 +0000 UTC m=+0.216524558 container attach aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:12:00 compute-0 ceph-mon[75144]: pgmap v211: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:00 compute-0 ceph-mon[75144]: 6.1c scrub starts
Nov 25 20:12:00 compute-0 ceph-mon[75144]: 6.1c scrub ok
Nov 25 20:12:00 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 20:12:00 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 20:12:00 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 20:12:00 compute-0 systemd[1]: Finished Create netns directory.
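These four systemd lines close out the netns-placeholder work from the Ansible tasks earlier in this window: the preset file written at 20:11:58, the ansible.builtin.systemd call with daemon_reload=True, enabled=True, state=started, the "Reloading." that follows, and finally the oneshot unit that deactivates on success. A sketch of the equivalent manual sequence driven from Python (an illustration of the ordering only, not how the Ansible module is implemented internally):

    import subprocess

    UNIT = "netns-placeholder.service"     # unit name taken from the log

    def systemctl(*args: str) -> None:
        subprocess.run(["systemctl", *args], check=True)

    systemctl("daemon-reload")             # pick up the new preset/unit files
    systemctl("enable", UNIT)              # persist across boots
    systemctl("start", UNIT)               # oneshot: runs, then deactivates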
Nov 25 20:12:00 compute-0 sudo[119634]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:00 compute-0 loving_robinson[119774]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:12:00 compute-0 loving_robinson[119774]: --> relative data size: 1.0
Nov 25 20:12:00 compute-0 loving_robinson[119774]: --> All data devices are unavailable
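The loving_robinson output above is ceph-volume's report for the lvm batch call issued at 20:11:58: it sees 3 LVM data devices and rejects all of them as unavailable, which is the expected outcome when the LVs already carry ceph lv_tags from a previous prepare (the lvm list output at 20:12:03 below shows osd_id 0-2 already assigned). A toy sketch of that filter, under the assumption that "already tagged as a ceph OSD" is what makes a device unavailable here (ceph-volume's real availability check is broader):

    # Tag dicts in the shape ceph-volume reports via "lvm list" (see below).
    lvs = {
        "/dev/ceph_vg0/ceph_lv0": {"ceph.osd_id": "0", "ceph.type": "block"},
        "/dev/ceph_vg1/ceph_lv1": {"ceph.osd_id": "1", "ceph.type": "block"},
        "/dev/ceph_vg2/ceph_lv2": {"ceph.osd_id": "2", "ceph.type": "block"},
    }

    def available(tags: dict) -> bool:
        # Assumption: an LV that already has a ceph.osd_id tag is in use.
        return "ceph.osd_id" not in tags

    unavailable = [path for path, tags in lvs.items() if not available(tags)]
    if len(unavailable) == len(lvs):
        print("--> All data devices are unavailable")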
Nov 25 20:12:00 compute-0 systemd[1]: libpod-aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e.scope: Deactivated successfully.
Nov 25 20:12:00 compute-0 podman[119757]: 2025-11-25 20:12:00.832919055 +0000 UTC m=+1.365856754 container died aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:12:00 compute-0 systemd[1]: libpod-aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e.scope: Consumed 1.086s CPU time.
Nov 25 20:12:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v212: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e244bf14f103f372cb6050a9591cc467e9c2f5805c362224e728e3c5597e976-merged.mount: Deactivated successfully.
Nov 25 20:12:00 compute-0 podman[119757]: 2025-11-25 20:12:00.890073274 +0000 UTC m=+1.423010943 container remove aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 20:12:00 compute-0 systemd[1]: libpod-conmon-aca9351666daf2af275e632f331f66c33d8c075387ac37598738855931b74b9e.scope: Deactivated successfully.
Nov 25 20:12:00 compute-0 sudo[119559]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:00 compute-0 sudo[119918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:12:00 compute-0 sudo[119918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:00 compute-0 sudo[119918]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:01 compute-0 ceph-mon[75144]: pgmap v212: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:01 compute-0 sudo[119968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:12:01 compute-0 sudo[119968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:01 compute-0 sudo[119968]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:01 compute-0 sudo[120019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:12:01 compute-0 sudo[120019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:01 compute-0 sudo[120019]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
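_set_new_cache_sizes above is the mon's cache autotuning redividing its budget: inc_alloc and full_alloc land on exactly 332 MiB each and kv_alloc on exactly 308 MiB, together just under the 1020054731-byte cache_size. Quick arithmetic check:

    MiB = 1024 * 1024

    cache_size = 1020054731
    inc_alloc = 348127232
    full_alloc = 348127232
    kv_alloc = 322961408

    print(inc_alloc / MiB, full_alloc / MiB, kv_alloc / MiB)   # 332.0 332.0 308.0
    print(inc_alloc + full_alloc + kv_alloc <= cache_size)     # True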
Nov 25 20:12:01 compute-0 sudo[120044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:12:01 compute-0 sudo[120044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:01 compute-0 python3.9[120016]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:12:01 compute-0 network[120085]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:12:01 compute-0 network[120086]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:12:01 compute-0 network[120089]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:12:01 compute-0 podman[120131]: 2025-11-25 20:12:01.539131962 +0000 UTC m=+0.046066907 container create d9a761e095633b501f154c94c618e6055851f4c9b85588f52bbc300d4a608819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jang, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 20:12:01 compute-0 podman[120131]: 2025-11-25 20:12:01.517460869 +0000 UTC m=+0.024395834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
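Two relationships in the pg_autoscaler block above can be checked against the rest of this log: the 64411926528-byte capacity is exactly the three 21470642176-byte LVs from the lvm list output below (the 60 GiB shown in the pgmap lines), and the .mgr pool's pg target is its usage ratio scaled by a factor of 300, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (that factorization is an inference, not something the log states):

    lv_size = 21470642176                 # per-LV size from "lvm list" below
    capacity = 64411926528                # effective_target_ratio denominator

    assert 3 * lv_size == capacity
    print(capacity / 2**30, "GiB")        # ~59.99, the "60 GiB" in the pgmap

    usage = 1.4371499967441557e-05        # .mgr pool space ratio
    target = 0.004311449990232467         # pg target before quantization

    print(target / usage)                 # ~300.0 -> assumed 3 OSDs * 100 pgs/osd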
Nov 25 20:12:02 compute-0 systemd[1]: Started libpod-conmon-d9a761e095633b501f154c94c618e6055851f4c9b85588f52bbc300d4a608819.scope.
Nov 25 20:12:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:12:02 compute-0 podman[120131]: 2025-11-25 20:12:02.140013927 +0000 UTC m=+0.646948892 container init d9a761e095633b501f154c94c618e6055851f4c9b85588f52bbc300d4a608819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:12:02 compute-0 podman[120131]: 2025-11-25 20:12:02.151502021 +0000 UTC m=+0.658436956 container start d9a761e095633b501f154c94c618e6055851f4c9b85588f52bbc300d4a608819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jang, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:12:02 compute-0 podman[120131]: 2025-11-25 20:12:02.155026454 +0000 UTC m=+0.661961399 container attach d9a761e095633b501f154c94c618e6055851f4c9b85588f52bbc300d4a608819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jang, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:12:02 compute-0 fervent_jang[120148]: 167 167
Nov 25 20:12:02 compute-0 systemd[1]: libpod-d9a761e095633b501f154c94c618e6055851f4c9b85588f52bbc300d4a608819.scope: Deactivated successfully.
Nov 25 20:12:02 compute-0 podman[120131]: 2025-11-25 20:12:02.162510312 +0000 UTC m=+0.669445257 container died d9a761e095633b501f154c94c618e6055851f4c9b85588f52bbc300d4a608819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b5fd87afcf2eb77f5539e3b7375c97e91f0595568f4bf5fa4f0e1b578e9be04-merged.mount: Deactivated successfully.
Nov 25 20:12:02 compute-0 podman[120131]: 2025-11-25 20:12:02.202159118 +0000 UTC m=+0.709094053 container remove d9a761e095633b501f154c94c618e6055851f4c9b85588f52bbc300d4a608819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jang, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:12:02 compute-0 systemd[1]: libpod-conmon-d9a761e095633b501f154c94c618e6055851f4c9b85588f52bbc300d4a608819.scope: Deactivated successfully.
Nov 25 20:12:02 compute-0 podman[120183]: 2025-11-25 20:12:02.393517841 +0000 UTC m=+0.047898766 container create 25b5d8fae07573c9cfae976f22420e6d1ce16d2e2a6fd4c042aa9e5fc688e8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:12:02 compute-0 systemd[1]: Started libpod-conmon-25b5d8fae07573c9cfae976f22420e6d1ce16d2e2a6fd4c042aa9e5fc688e8fd.scope.
Nov 25 20:12:02 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 20:12:02 compute-0 podman[120183]: 2025-11-25 20:12:02.370311858 +0000 UTC m=+0.024692813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:12:02 compute-0 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 20:12:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90015dbfb3dfae83e858735e7f7b0f73a0a93d041b88b01febf05e7b9df009b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90015dbfb3dfae83e858735e7f7b0f73a0a93d041b88b01febf05e7b9df009b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90015dbfb3dfae83e858735e7f7b0f73a0a93d041b88b01febf05e7b9df009b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90015dbfb3dfae83e858735e7f7b0f73a0a93d041b88b01febf05e7b9df009b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:12:02 compute-0 podman[120183]: 2025-11-25 20:12:02.494307012 +0000 UTC m=+0.148687947 container init 25b5d8fae07573c9cfae976f22420e6d1ce16d2e2a6fd4c042aa9e5fc688e8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:12:02 compute-0 podman[120183]: 2025-11-25 20:12:02.506919964 +0000 UTC m=+0.161300889 container start 25b5d8fae07573c9cfae976f22420e6d1ce16d2e2a6fd4c042aa9e5fc688e8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:12:02 compute-0 podman[120183]: 2025-11-25 20:12:02.520446882 +0000 UTC m=+0.174827837 container attach 25b5d8fae07573c9cfae976f22420e6d1ce16d2e2a6fd4c042aa9e5fc688e8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:12:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v213: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:02 compute-0 ceph-mon[75144]: 6.6 scrub starts
Nov 25 20:12:02 compute-0 ceph-mon[75144]: 6.6 scrub ok
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]: {
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:     "0": [
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:         {
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "devices": [
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "/dev/loop3"
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             ],
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_name": "ceph_lv0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_size": "21470642176",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "name": "ceph_lv0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "tags": {
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.cluster_name": "ceph",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.crush_device_class": "",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.encrypted": "0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.osd_id": "0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.type": "block",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.vdo": "0"
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             },
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "type": "block",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "vg_name": "ceph_vg0"
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:         }
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:     ],
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:     "1": [
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:         {
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "devices": [
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "/dev/loop4"
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             ],
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_name": "ceph_lv1",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_size": "21470642176",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "name": "ceph_lv1",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "tags": {
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.cluster_name": "ceph",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.crush_device_class": "",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.encrypted": "0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.osd_id": "1",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.type": "block",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.vdo": "0"
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             },
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "type": "block",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "vg_name": "ceph_vg1"
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:         }
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:     ],
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:     "2": [
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:         {
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "devices": [
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "/dev/loop5"
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             ],
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_name": "ceph_lv2",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_size": "21470642176",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "name": "ceph_lv2",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "tags": {
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.cluster_name": "ceph",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.crush_device_class": "",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.encrypted": "0",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.osd_id": "2",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.type": "block",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:                 "ceph.vdo": "0"
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             },
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "type": "block",
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:             "vg_name": "ceph_vg2"
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:         }
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:     ]
Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]: }
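The pedantic_chaplygin lines from 20:12:03 are one JSON document from "ceph-volume lvm list --format json", emitted line by line under the container's journald tag. A sketch that strips the syslog prefix and reassembles the document (the prefix regex is an assumption fitted to these lines, and the sample is abbreviated):

    import json
    import re

    raw_lines = [
        'Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]: {',
        'Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:     "0": [',
        'Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:         {"lv_name": "ceph_lv0", "vg_name": "ceph_vg0"}',
        'Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]:     ]',
        'Nov 25 20:12:03 compute-0 pedantic_chaplygin[120205]: }',
    ]

    # Drop "<timestamp> <host> <tag>[<pid>]: " and keep the JSON payload.
    prefix = re.compile(r"^\w{3} +\d+ [\d:]+ \S+ \S+\[\d+\]: ")
    doc = "\n".join(prefix.sub("", line) for line in raw_lines)

    osds = json.loads(doc)
    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            print(osd_id, lv["vg_name"] + "/" + lv["lv_name"])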
Nov 25 20:12:03 compute-0 podman[120183]: 2025-11-25 20:12:03.373973108 +0000 UTC m=+1.028354073 container died 25b5d8fae07573c9cfae976f22420e6d1ce16d2e2a6fd4c042aa9e5fc688e8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:12:03 compute-0 systemd[1]: libpod-25b5d8fae07573c9cfae976f22420e6d1ce16d2e2a6fd4c042aa9e5fc688e8fd.scope: Deactivated successfully.
Nov 25 20:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-90015dbfb3dfae83e858735e7f7b0f73a0a93d041b88b01febf05e7b9df009b8-merged.mount: Deactivated successfully.
Nov 25 20:12:03 compute-0 podman[120183]: 2025-11-25 20:12:03.449272236 +0000 UTC m=+1.103653161 container remove 25b5d8fae07573c9cfae976f22420e6d1ce16d2e2a6fd4c042aa9e5fc688e8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:12:03 compute-0 systemd[1]: libpod-conmon-25b5d8fae07573c9cfae976f22420e6d1ce16d2e2a6fd4c042aa9e5fc688e8fd.scope: Deactivated successfully.
Nov 25 20:12:03 compute-0 sudo[120044]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:03 compute-0 sudo[120271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:12:03 compute-0 sudo[120271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:03 compute-0 sudo[120271]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:03 compute-0 sudo[120296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:12:03 compute-0 sudo[120296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:03 compute-0 sudo[120296]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:03 compute-0 sudo[120321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:12:03 compute-0 sudo[120321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:03 compute-0 sudo[120321]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:03 compute-0 sudo[120346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:12:03 compute-0 sudo[120346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:03 compute-0 ceph-mon[75144]: pgmap v213: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:04 compute-0 podman[120411]: 2025-11-25 20:12:04.211962314 +0000 UTC m=+0.065639354 container create 972b90e7c3e46a4733e9a30f58ae46fe597a9bb410541100c7aeb5764a94b7e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:12:04 compute-0 systemd[1]: Started libpod-conmon-972b90e7c3e46a4733e9a30f58ae46fe597a9bb410541100c7aeb5764a94b7e6.scope.
Nov 25 20:12:04 compute-0 podman[120411]: 2025-11-25 20:12:04.190860667 +0000 UTC m=+0.044537717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:12:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:12:04 compute-0 podman[120411]: 2025-11-25 20:12:04.311434741 +0000 UTC m=+0.165111771 container init 972b90e7c3e46a4733e9a30f58ae46fe597a9bb410541100c7aeb5764a94b7e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:12:04 compute-0 podman[120411]: 2025-11-25 20:12:04.323560221 +0000 UTC m=+0.177237251 container start 972b90e7c3e46a4733e9a30f58ae46fe597a9bb410541100c7aeb5764a94b7e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:12:04 compute-0 podman[120411]: 2025-11-25 20:12:04.32773655 +0000 UTC m=+0.181413640 container attach 972b90e7c3e46a4733e9a30f58ae46fe597a9bb410541100c7aeb5764a94b7e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:12:04 compute-0 fervent_bassi[120427]: 167 167
Nov 25 20:12:04 compute-0 systemd[1]: libpod-972b90e7c3e46a4733e9a30f58ae46fe597a9bb410541100c7aeb5764a94b7e6.scope: Deactivated successfully.
Nov 25 20:12:04 compute-0 podman[120411]: 2025-11-25 20:12:04.330605186 +0000 UTC m=+0.184282266 container died 972b90e7c3e46a4733e9a30f58ae46fe597a9bb410541100c7aeb5764a94b7e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-61adb0700602e330c5ae62b159c8c4bdf4c79b2837ac2a37976c05c681b26cb5-merged.mount: Deactivated successfully.
Nov 25 20:12:04 compute-0 podman[120411]: 2025-11-25 20:12:04.382088976 +0000 UTC m=+0.235765976 container remove 972b90e7c3e46a4733e9a30f58ae46fe597a9bb410541100c7aeb5764a94b7e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:12:04 compute-0 systemd[1]: libpod-conmon-972b90e7c3e46a4733e9a30f58ae46fe597a9bb410541100c7aeb5764a94b7e6.scope: Deactivated successfully.
Nov 25 20:12:04 compute-0 podman[120453]: 2025-11-25 20:12:04.59470208 +0000 UTC m=+0.054189883 container create 87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_newton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:12:04 compute-0 systemd[1]: Started libpod-conmon-87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c.scope.
Nov 25 20:12:04 compute-0 podman[120453]: 2025-11-25 20:12:04.569155865 +0000 UTC m=+0.028643748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:12:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c763506881d2f4203758a65a318ff8afc02a29747481a22ea942065b8e697ea6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c763506881d2f4203758a65a318ff8afc02a29747481a22ea942065b8e697ea6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c763506881d2f4203758a65a318ff8afc02a29747481a22ea942065b8e697ea6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c763506881d2f4203758a65a318ff8afc02a29747481a22ea942065b8e697ea6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:12:04 compute-0 podman[120453]: 2025-11-25 20:12:04.717546504 +0000 UTC m=+0.177034347 container init 87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_newton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:12:04 compute-0 podman[120453]: 2025-11-25 20:12:04.729511389 +0000 UTC m=+0.188999232 container start 87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_newton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:12:04 compute-0 podman[120453]: 2025-11-25 20:12:04.733577026 +0000 UTC m=+0.193064889 container attach 87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_newton, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:12:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v214: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:05 compute-0 beautiful_newton[120474]: {
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "osd_id": 2,
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "type": "bluestore"
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:     },
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "osd_id": 1,
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "type": "bluestore"
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:     },
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "osd_id": 0,
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:         "type": "bluestore"
Nov 25 20:12:05 compute-0 beautiful_newton[120474]:     }
Nov 25 20:12:05 compute-0 beautiful_newton[120474]: }
Nov 25 20:12:05 compute-0 systemd[1]: libpod-87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c.scope: Deactivated successfully.
Nov 25 20:12:05 compute-0 podman[120453]: 2025-11-25 20:12:05.75859132 +0000 UTC m=+1.218079123 container died 87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:12:05 compute-0 systemd[1]: libpod-87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c.scope: Consumed 1.037s CPU time.
Nov 25 20:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c763506881d2f4203758a65a318ff8afc02a29747481a22ea942065b8e697ea6-merged.mount: Deactivated successfully.
Nov 25 20:12:05 compute-0 podman[120453]: 2025-11-25 20:12:05.808443857 +0000 UTC m=+1.267931660 container remove 87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:12:05 compute-0 systemd[1]: libpod-conmon-87350e5a4b0d019aa30868ef447c4e7a14259acf4523ae06e2ddee5737ace58c.scope: Deactivated successfully.
Nov 25 20:12:05 compute-0 sudo[120346]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:12:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:12:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:12:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:12:05 compute-0 sudo[120581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:12:05 compute-0 sudo[120581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:05 compute-0 sudo[120581]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:05 compute-0 ceph-mon[75144]: pgmap v214: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:12:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:12:06 compute-0 sudo[120606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:12:06 compute-0 sudo[120606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:12:06 compute-0 sudo[120606]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:06 compute-0 sudo[120756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzdgehmiggcpvudtwtnzylqnkhungqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101526.2808552-195-134775098931715/AnsiballZ_stat.py'
Nov 25 20:12:06 compute-0 sudo[120756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v215: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:06 compute-0 python3.9[120758]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:06 compute-0 sudo[120756]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:07 compute-0 sudo[120834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpvrmkodmtrjcnipyfurunghhsbtfxcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101526.2808552-195-134775098931715/AnsiballZ_file.py'
Nov 25 20:12:07 compute-0 sudo[120834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:07 compute-0 python3.9[120836]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:07 compute-0 sudo[120834]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:07 compute-0 ceph-mon[75144]: pgmap v215: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:08 compute-0 sudo[120986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tysfbfkgxgsdsxtqssowpddiceldihcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101527.6296403-208-10426925884806/AnsiballZ_file.py'
Nov 25 20:12:08 compute-0 sudo[120986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:08 compute-0 python3.9[120988]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:08 compute-0 sudo[120986]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:08 compute-0 sudo[121138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ennirhutltemakzbftbmqiddvlxoxftj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101528.424314-216-253359862826787/AnsiballZ_stat.py'
Nov 25 20:12:08 compute-0 sudo[121138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v216: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:09 compute-0 python3.9[121140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:09 compute-0 sudo[121138]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:09 compute-0 sudo[121216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvyazhklivfqzbdgsvjvssmajjkfdohk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101528.424314-216-253359862826787/AnsiballZ_file.py'
Nov 25 20:12:09 compute-0 sudo[121216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:09 compute-0 python3.9[121218]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:09 compute-0 sudo[121216]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:10 compute-0 ceph-mon[75144]: pgmap v216: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:10 compute-0 sudo[121368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdyohybeucvqyumcoeykaejwlbdfwffy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101529.9904501-231-129209264402606/AnsiballZ_timezone.py'
Nov 25 20:12:10 compute-0 sudo[121368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:10 compute-0 python3.9[121370]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 20:12:10 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 20:12:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v217: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:10 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 20:12:10 compute-0 sudo[121368]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:11 compute-0 sudo[121524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywrriegoxcloursohgfiznpaihcorqfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101531.2113104-240-186196483739642/AnsiballZ_file.py'
Nov 25 20:12:11 compute-0 sudo[121524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:11 compute-0 python3.9[121526]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:11 compute-0 sudo[121524]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:12 compute-0 ceph-mon[75144]: pgmap v217: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:12 compute-0 sudo[121676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwwveifxvtoajnpailesuujqpukzxslx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101531.9609609-248-108094491900023/AnsiballZ_stat.py'
Nov 25 20:12:12 compute-0 sudo[121676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:12 compute-0 python3.9[121678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:12 compute-0 sudo[121676]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:12 compute-0 sudo[121754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoxuhwynfhkfgscekxhdkpjfsnntiywz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101531.9609609-248-108094491900023/AnsiballZ_file.py'
Nov 25 20:12:12 compute-0 sudo[121754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v218: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:13 compute-0 python3.9[121756]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:13 compute-0 sudo[121754]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:13 compute-0 sudo[121906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upxjtlznbqhytchohaoeljufalcgbpmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101533.2526987-260-125639381378320/AnsiballZ_stat.py'
Nov 25 20:12:13 compute-0 sudo[121906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:13 compute-0 python3.9[121908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:13 compute-0 sudo[121906]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:14 compute-0 ceph-mon[75144]: pgmap v218: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:14 compute-0 sudo[121984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prbwinvungsboszbzitpxsbvuthykozf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101533.2526987-260-125639381378320/AnsiballZ_file.py'
Nov 25 20:12:14 compute-0 sudo[121984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:14 compute-0 python3.9[121986]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.knv9r1hq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:14 compute-0 sudo[121984]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v219: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:15 compute-0 sudo[122136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsaloqtpretkohnbghdhfxxlonroktgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101534.7223806-272-221887506317063/AnsiballZ_stat.py'
Nov 25 20:12:15 compute-0 sudo[122136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:15 compute-0 python3.9[122138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:15 compute-0 sudo[122136]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:15 compute-0 sudo[122214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khjgnedsvzpgmhuhdtmgahaofdcdekby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101534.7223806-272-221887506317063/AnsiballZ_file.py'
Nov 25 20:12:15 compute-0 sudo[122214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:15 compute-0 python3.9[122216]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:15 compute-0 sudo[122214]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:16 compute-0 ceph-mon[75144]: pgmap v219: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:16 compute-0 sudo[122366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zftwxyahmzeaoedxngcfavmcyadddkjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101536.0353858-285-83440929795808/AnsiballZ_command.py'
Nov 25 20:12:16 compute-0 sudo[122366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:16 compute-0 python3.9[122368]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:12:16 compute-0 sudo[122366]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v220: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:17 compute-0 ceph-mon[75144]: pgmap v220: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:17 compute-0 sudo[122519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aomymhosecfjnhroqmfqlwfxdqzhwteb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764101537.0724375-293-147901911427120/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 20:12:17 compute-0 sudo[122519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:17 compute-0 python3[122521]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 20:12:17 compute-0 sudo[122519]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:18 compute-0 sudo[122671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-easizpekgtconoejyomoepuickwelhxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101538.2248237-301-153629409199057/AnsiballZ_stat.py'
Nov 25 20:12:18 compute-0 sudo[122671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v221: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:18 compute-0 python3.9[122673]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:18 compute-0 sudo[122671]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:19 compute-0 sudo[122749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffiaibgtxtjgfppcmswqykzbqcxhihtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101538.2248237-301-153629409199057/AnsiballZ_file.py'
Nov 25 20:12:19 compute-0 sudo[122749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:19 compute-0 python3.9[122751]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:19 compute-0 sudo[122749]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:19 compute-0 ceph-mon[75144]: pgmap v221: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:20 compute-0 sudo[122901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eevnhkftxrahqhotjiyybxztqzkhbvbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101539.6567461-313-68318628403099/AnsiballZ_stat.py'
Nov 25 20:12:20 compute-0 sudo[122901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:20 compute-0 python3.9[122903]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:20 compute-0 sudo[122901]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:20 compute-0 sudo[122979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qarbrapbuvtorjtbmqqtzlwgxigjajbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101539.6567461-313-68318628403099/AnsiballZ_file.py'
Nov 25 20:12:20 compute-0 sudo[122979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:20 compute-0 python3.9[122981]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:20 compute-0 sudo[122979]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v222: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:21 compute-0 sudo[123131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixbyamkjtwwfhqbknsabemechwbvytl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101540.9996822-325-279776503038457/AnsiballZ_stat.py'
Nov 25 20:12:21 compute-0 sudo[123131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:21 compute-0 python3.9[123133]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:21 compute-0 sudo[123131]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:21 compute-0 sudo[123209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdqatsgijnucukapiovlielcxbjkupml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101540.9996822-325-279776503038457/AnsiballZ_file.py'
Nov 25 20:12:21 compute-0 sudo[123209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:21 compute-0 ceph-mon[75144]: pgmap v222: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:22 compute-0 python3.9[123211]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:22 compute-0 sudo[123209]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:22 compute-0 sudo[123361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-estsjlkbpkbykyldjlpmixpuuptzkoig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101542.305865-337-30748176693182/AnsiballZ_stat.py'
Nov 25 20:12:22 compute-0 sudo[123361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v223: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:22 compute-0 python3.9[123363]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:23 compute-0 sudo[123361]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:23 compute-0 sudo[123439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spoounhrceboacjnfhcrwkcjaduhwskt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101542.305865-337-30748176693182/AnsiballZ_file.py'
Nov 25 20:12:23 compute-0 sudo[123439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:23 compute-0 python3.9[123441]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:23 compute-0 sudo[123439]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:23 compute-0 ceph-mon[75144]: pgmap v223: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:24 compute-0 sudo[123591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqbspzwaytxdrreynteaybjqfkvcokzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101543.7580638-349-266365545067227/AnsiballZ_stat.py'
Nov 25 20:12:24 compute-0 sudo[123591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:24 compute-0 python3.9[123593]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:24 compute-0 sudo[123591]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:24 compute-0 sudo[123669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epnbdpzgbirgbalogxiejpanhhtcqoft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101543.7580638-349-266365545067227/AnsiballZ_file.py'
Nov 25 20:12:24 compute-0 sudo[123669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v224: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:25 compute-0 python3.9[123671]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:25 compute-0 sudo[123669]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:25 compute-0 sudo[123821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypmbgnxryspxqhlhpidzvsleerzhbhji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101545.331539-362-128971411488398/AnsiballZ_command.py'
Nov 25 20:12:25 compute-0 sudo[123821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:25 compute-0 python3.9[123823]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:12:25 compute-0 sudo[123821]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:25 compute-0 ceph-mon[75144]: pgmap v224: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:26 compute-0 sudo[123976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxjlprpygpfkvbdsblioaqwvgatwxsxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101546.1064775-370-125788008840519/AnsiballZ_blockinfile.py'
Nov 25 20:12:26 compute-0 sudo[123976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:12:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:12:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:12:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:12:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:12:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:12:26 compute-0 python3.9[123978]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:26 compute-0 sudo[123976]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v225: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:27 compute-0 sudo[124128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbggljtmxpapknhsalyttpjkngvqtvia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101547.1259935-379-108613144571380/AnsiballZ_file.py'
Nov 25 20:12:27 compute-0 sudo[124128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:27 compute-0 python3.9[124130]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:27 compute-0 sudo[124128]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:27 compute-0 ceph-mon[75144]: pgmap v225: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:28 compute-0 sudo[124280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmnumkgvaladldyzmbcffgznzihewxtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101547.8775053-379-106852905384976/AnsiballZ_file.py'
Nov 25 20:12:28 compute-0 sudo[124280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:28 compute-0 python3.9[124282]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:28 compute-0 sudo[124280]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v226: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:29 compute-0 sudo[124432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymtdgjlrmzmwsashnyxnalokixjxljqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101548.65248-394-253510598854516/AnsiballZ_mount.py'
Nov 25 20:12:29 compute-0 sudo[124432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:29 compute-0 python3.9[124434]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 20:12:29 compute-0 sudo[124432]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:29 compute-0 ceph-mon[75144]: pgmap v226: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:30 compute-0 sudo[124584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwcbxnspcghhnuyzawxdxozwvvhuzjwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101549.8732696-394-67258092694766/AnsiballZ_mount.py'
Nov 25 20:12:30 compute-0 sudo[124584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:30 compute-0 python3.9[124586]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 20:12:30 compute-0 sudo[124584]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:30 compute-0 sshd-session[116588]: Connection closed by 192.168.122.30 port 55832
Nov 25 20:12:30 compute-0 sshd-session[116585]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:12:30 compute-0 systemd-logind[789]: Session 40 logged out. Waiting for processes to exit.
Nov 25 20:12:30 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 25 20:12:30 compute-0 systemd[1]: session-40.scope: Consumed 33.879s CPU time.
Nov 25 20:12:30 compute-0 systemd-logind[789]: Removed session 40.
Nov 25 20:12:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v227: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:31 compute-0 ceph-mon[75144]: pgmap v227: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v228: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:33 compute-0 ceph-mon[75144]: pgmap v228: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v229: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:36 compute-0 ceph-mon[75144]: pgmap v229: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:36 compute-0 sshd-session[124612]: Accepted publickey for zuul from 192.168.122.30 port 38400 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:12:36 compute-0 systemd-logind[789]: New session 41 of user zuul.
Nov 25 20:12:36 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 25 20:12:36 compute-0 sshd-session[124612]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:12:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v230: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:36 compute-0 sudo[124765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdyhcaaroloshjboqdpxeqhdlgobbbjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101556.3503125-16-273180450569428/AnsiballZ_tempfile.py'
Nov 25 20:12:36 compute-0 sudo[124765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:37 compute-0 python3.9[124767]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 20:12:37 compute-0 sudo[124765]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:37 compute-0 sudo[124917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dorhazjrqcjcvxgtrcssouetecjqebvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101557.3283224-28-210898253376361/AnsiballZ_stat.py'
Nov 25 20:12:37 compute-0 sudo[124917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:38 compute-0 ceph-mon[75144]: pgmap v230: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:38 compute-0 python3.9[124919]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:12:38 compute-0 sudo[124917]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:38 compute-0 sudo[125071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfdzrquuetgzplsbsbgivbwgivxikmic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101558.3305676-36-2836526633336/AnsiballZ_slurp.py'
Nov 25 20:12:38 compute-0 sudo[125071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v231: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:39 compute-0 python3.9[125073]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 25 20:12:39 compute-0 sudo[125071]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:39 compute-0 sudo[125223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szcqcwppikdkoruhlixhfwvaqgodnpqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101559.3034458-44-173334699141714/AnsiballZ_stat.py'
Nov 25 20:12:39 compute-0 sudo[125223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:39 compute-0 python3.9[125225]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.etxmwucx follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:39 compute-0 sudo[125223]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:40 compute-0 ceph-mon[75144]: pgmap v231: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:40 compute-0 sudo[125348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igpqiqioygzeitddljekjdhwcvdzyfrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101559.3034458-44-173334699141714/AnsiballZ_copy.py'
Nov 25 20:12:40 compute-0 sudo[125348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:40 compute-0 python3.9[125350]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.etxmwucx mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764101559.3034458-44-173334699141714/.source.etxmwucx _original_basename=.3c27wt6k follow=False checksum=c80e599a9215486a54015cc980bba48043e38097 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:40 compute-0 sudo[125348]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v232: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:40 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 20:12:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:41 compute-0 sudo[125502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtdqrahicfcapkedtepqffbawrguyalj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101560.9297264-59-68199932319310/AnsiballZ_setup.py'
Nov 25 20:12:41 compute-0 sudo[125502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:41 compute-0 python3.9[125504]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:12:41 compute-0 sudo[125502]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:42 compute-0 ceph-mon[75144]: pgmap v232: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:42 compute-0 sudo[125654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyfdkrewdzogghlwxqcrxwxcsamyywhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101562.229097-68-114378000747717/AnsiballZ_blockinfile.py'
Nov 25 20:12:42 compute-0 sudo[125654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v233: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:42 compute-0 python3.9[125656]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6AbHDFm6oiXOksIFhVfW+rKLsG5lsUMZ4h0LK/vi3baEm3lDOSCFiRaledzvCGw8pcbo5B5Ui9LwA6ZrurFA4EvBdRcNX2MYx8E7VQUBz19Cv5ssHGiokeLg/X8NRxvhizSNqEqTIXOBW/sjl2ML6B7c9Ho/On/2VOOogZqw39bPr58N1jZc8GGzZllxOMAGKQTrmbhrf2DDBl/eIvCnBeBarDQEuCXz7WY4Yg/5ExbD2MD4pVSgsmZKlZ3hZ/bGga19lvUoww5cRWp5mc1jmIEYS2Ns9Tam3tLAbA+4X02wq1hDbtpAOiV05naOPZcQ6NH8nyRFalVZ5JR9jJX31VllVhUB0J00We3tPSsAVeRWruGGvVcIZLpscmH3qIBb4ZpdiXwEBglE9K88PvEF5Q+ityKfnZBFAWx3pRzuVBMUZ+kKSL0KzJjdIcejX5wpTr9daIswPMC8qv8Bl3/6FNuXz9RqyUpIR5ujMgh8pQYJRGTx4LQoeVD95PGgEmW8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOnaumPCLJWozHeEwnBl9HIrTuoxcpbqSdFvByOBKVNO
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMt5rXHYYnmFaVy9amIUR4NjKK7m0LWd/U991zYz1D08AUE+ySzn4CMebmlNzvQuZCF/tJA3h93sOksMfGwh5Ds=
                                              create=True mode=0644 path=/tmp/ansible.etxmwucx state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:42 compute-0 sudo[125654]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:43 compute-0 ceph-mon[75144]: pgmap v233: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:43 compute-0 sudo[125806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncfxmbnykjqzeadqjbgcxqewmizisqgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101563.1909106-76-169245018036051/AnsiballZ_command.py'
Nov 25 20:12:43 compute-0 sudo[125806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:43 compute-0 python3.9[125808]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.etxmwucx' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:12:43 compute-0 sudo[125806]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:44 compute-0 sudo[125960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubrziwjdgpxgmjidmhdspsgvxaxxdlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101564.1100762-84-65368811635376/AnsiballZ_file.py'
Nov 25 20:12:44 compute-0 sudo[125960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:44 compute-0 python3.9[125962]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.etxmwucx state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v234: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:44 compute-0 sudo[125960]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:45 compute-0 sshd-session[124615]: Connection closed by 192.168.122.30 port 38400
Nov 25 20:12:45 compute-0 sshd-session[124612]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:12:45 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 25 20:12:45 compute-0 systemd[1]: session-41.scope: Consumed 6.204s CPU time.
Nov 25 20:12:45 compute-0 systemd-logind[789]: Session 41 logged out. Waiting for processes to exit.
Nov 25 20:12:45 compute-0 systemd-logind[789]: Removed session 41.
Nov 25 20:12:45 compute-0 ceph-mon[75144]: pgmap v234: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v235: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:47 compute-0 ceph-mon[75144]: pgmap v235: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v236: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:49 compute-0 ceph-mon[75144]: pgmap v236: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:50 compute-0 sshd-session[125987]: Accepted publickey for zuul from 192.168.122.30 port 49244 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:12:50 compute-0 systemd-logind[789]: New session 42 of user zuul.
Nov 25 20:12:50 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 25 20:12:50 compute-0 sshd-session[125987]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:12:50 compute-0 sshd-session[71419]: Received disconnect from 38.102.83.150 port 52102:11: disconnected by user
Nov 25 20:12:50 compute-0 sshd-session[71419]: Disconnected from user zuul 38.102.83.150 port 52102
Nov 25 20:12:50 compute-0 sshd-session[71416]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:12:50 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 20:12:50 compute-0 systemd[1]: session-18.scope: Consumed 1min 27.581s CPU time.
Nov 25 20:12:50 compute-0 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Nov 25 20:12:50 compute-0 systemd-logind[789]: Removed session 18.
Nov 25 20:12:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v237: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:51 compute-0 python3.9[126140]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:12:51 compute-0 ceph-mon[75144]: pgmap v237: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:52 compute-0 sudo[126294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhsowmktxnqvzagsplfbidhqwlrnwmsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101571.8505955-32-114709770212365/AnsiballZ_systemd.py'
Nov 25 20:12:52 compute-0 sudo[126294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:52 compute-0 python3.9[126296]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 20:12:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v238: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:53 compute-0 sudo[126294]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:53 compute-0 ceph-mon[75144]: pgmap v238: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:54 compute-0 sudo[126448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxoqynhjucbtdrczpdsypmeheqrqdgyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101574.1694045-40-249053095822853/AnsiballZ_systemd.py'
Nov 25 20:12:54 compute-0 sudo[126448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:54 compute-0 python3.9[126450]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:12:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v239: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:54 compute-0 sudo[126448]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:55 compute-0 sudo[126601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgqscblwvvydpxcdygrvzqofiomkuktk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101575.130586-49-41436976900160/AnsiballZ_command.py'
Nov 25 20:12:55 compute-0 sudo[126601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:55 compute-0 python3.9[126603]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:12:55 compute-0 sudo[126601]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:55 compute-0 ceph-mon[75144]: pgmap v239: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:12:56 compute-0 sudo[126754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eofjphibrrtisjpzykxyqgbfdwejkghv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101576.1340532-57-7685798199819/AnsiballZ_stat.py'
Nov 25 20:12:56 compute-0 sudo[126754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v240: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:56 compute-0 python3.9[126756]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:12:56 compute-0 sudo[126754]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:12:56
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'backups', 'volumes']
Nov 25 20:12:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:12:57 compute-0 sudo[126906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qabjeiprnutwivrvohpjaobhqbxtxnmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101577.156505-66-280604851847120/AnsiballZ_file.py'
Nov 25 20:12:57 compute-0 sudo[126906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:12:57 compute-0 python3.9[126908]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:57 compute-0 sudo[126906]: pam_unix(sudo:session): session closed for user root
Nov 25 20:12:58 compute-0 ceph-mon[75144]: pgmap v240: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:12:58 compute-0 sshd-session[125990]: Connection closed by 192.168.122.30 port 49244
Nov 25 20:12:58 compute-0 sshd-session[125987]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:12:58 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 25 20:12:58 compute-0 systemd[1]: session-42.scope: Consumed 4.542s CPU time.
Nov 25 20:12:58 compute-0 systemd-logind[789]: Session 42 logged out. Waiting for processes to exit.
Nov 25 20:12:58 compute-0 systemd-logind[789]: Removed session 42.
Nov 25 20:12:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v241: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:00 compute-0 ceph-mon[75144]: pgmap v241: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v242: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:01 compute-0 ceph-mon[75144]: pgmap v242: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:13:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v243: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:03 compute-0 sshd-session[126933]: Accepted publickey for zuul from 192.168.122.30 port 45152 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:13:03 compute-0 systemd-logind[789]: New session 43 of user zuul.
Nov 25 20:13:03 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 25 20:13:03 compute-0 sshd-session[126933]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:13:03 compute-0 ceph-mon[75144]: pgmap v243: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:04 compute-0 python3.9[127086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:13:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v244: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:05 compute-0 sudo[127240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuighzcbswkydqvwlaifwjtqlglxsbdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101585.2600372-34-84128215422671/AnsiballZ_setup.py'
Nov 25 20:13:05 compute-0 sudo[127240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:05 compute-0 python3.9[127242]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:13:05 compute-0 ceph-mon[75144]: pgmap v244: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:06 compute-0 sudo[127249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:13:06 compute-0 sudo[127249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:06 compute-0 sudo[127249]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:06 compute-0 sudo[127240]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:06 compute-0 sudo[127276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:13:06 compute-0 sudo[127276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:06 compute-0 sudo[127276]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:06 compute-0 sudo[127301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:13:06 compute-0 sudo[127301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:06 compute-0 sudo[127301]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:06 compute-0 sudo[127326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:13:06 compute-0 sudo[127326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:06 compute-0 sudo[127453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmbqdvpjeqmxtxkhuephpnhywmyrxhwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101585.2600372-34-84128215422671/AnsiballZ_dnf.py'
Nov 25 20:13:06 compute-0 sudo[127453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:06 compute-0 sudo[127326]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:13:06 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:13:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:13:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:13:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:13:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:13:06 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 41403e22-65ee-41b2-a73c-95459e340ce3 does not exist
Nov 25 20:13:06 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 6c74f3a7-ea89-4f13-b327-f43ccdb3a471 does not exist
Nov 25 20:13:06 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d605d21c-bfba-4fc8-ae6a-2c9b5a8d119c does not exist
Nov 25 20:13:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:13:06 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:13:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:13:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:13:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:13:06 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:13:06 compute-0 sudo[127459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:13:06 compute-0 sudo[127459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:06 compute-0 sudo[127459]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v245: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:06 compute-0 sudo[127484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:13:06 compute-0 sudo[127484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:06 compute-0 sudo[127484]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:13:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:13:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:13:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:13:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:13:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:13:06 compute-0 python3.9[127458]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 20:13:07 compute-0 sudo[127509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:13:07 compute-0 sudo[127509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:07 compute-0 sudo[127509]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:07 compute-0 sudo[127535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:13:07 compute-0 sudo[127535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:07 compute-0 podman[127602]: 2025-11-25 20:13:07.449443111 +0000 UTC m=+0.041529040 container create 932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_keller, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:13:07 compute-0 systemd[1]: Started libpod-conmon-932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566.scope.
Nov 25 20:13:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:13:07 compute-0 podman[127602]: 2025-11-25 20:13:07.433158476 +0000 UTC m=+0.025244425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:13:07 compute-0 podman[127602]: 2025-11-25 20:13:07.544045728 +0000 UTC m=+0.136131677 container init 932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:13:07 compute-0 podman[127602]: 2025-11-25 20:13:07.550687365 +0000 UTC m=+0.142773324 container start 932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:13:07 compute-0 podman[127602]: 2025-11-25 20:13:07.555433832 +0000 UTC m=+0.147519781 container attach 932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:13:07 compute-0 jovial_keller[127619]: 167 167
Nov 25 20:13:07 compute-0 systemd[1]: libpod-932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566.scope: Deactivated successfully.
Nov 25 20:13:07 compute-0 conmon[127619]: conmon 932477ee980b3e3092e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566.scope/container/memory.events
Nov 25 20:13:07 compute-0 podman[127602]: 2025-11-25 20:13:07.560784374 +0000 UTC m=+0.152870343 container died 932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_keller, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8f67326459be77f97562772bf0e3089dbc021de066e5a7b43fec7bad6f8f29b-merged.mount: Deactivated successfully.
Nov 25 20:13:07 compute-0 podman[127602]: 2025-11-25 20:13:07.618387782 +0000 UTC m=+0.210473751 container remove 932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_keller, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:13:07 compute-0 systemd[1]: libpod-conmon-932477ee980b3e3092e1a8bc3d84333c862a6238c6d874d4aae3b838ec530566.scope: Deactivated successfully.
Nov 25 20:13:07 compute-0 podman[127643]: 2025-11-25 20:13:07.816058871 +0000 UTC m=+0.054847856 container create 3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:13:07 compute-0 systemd[1]: Started libpod-conmon-3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d.scope.
Nov 25 20:13:07 compute-0 podman[127643]: 2025-11-25 20:13:07.796339085 +0000 UTC m=+0.035128060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:13:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bce81073cc9d56c713bd4c4de27f825f9d43307596cd2814479f4f7eeaabde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bce81073cc9d56c713bd4c4de27f825f9d43307596cd2814479f4f7eeaabde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bce81073cc9d56c713bd4c4de27f825f9d43307596cd2814479f4f7eeaabde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bce81073cc9d56c713bd4c4de27f825f9d43307596cd2814479f4f7eeaabde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bce81073cc9d56c713bd4c4de27f825f9d43307596cd2814479f4f7eeaabde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:07 compute-0 podman[127643]: 2025-11-25 20:13:07.917565152 +0000 UTC m=+0.156354097 container init 3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:13:07 compute-0 podman[127643]: 2025-11-25 20:13:07.924834175 +0000 UTC m=+0.163623160 container start 3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:13:07 compute-0 podman[127643]: 2025-11-25 20:13:07.931184155 +0000 UTC m=+0.169973120 container attach 3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:13:07 compute-0 ceph-mon[75144]: pgmap v245: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:08 compute-0 sudo[127453]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v246: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:09 compute-0 nostalgic_boyd[127660]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:13:09 compute-0 nostalgic_boyd[127660]: --> relative data size: 1.0
Nov 25 20:13:09 compute-0 nostalgic_boyd[127660]: --> All data devices are unavailable
Nov 25 20:13:09 compute-0 systemd[1]: libpod-3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d.scope: Deactivated successfully.
Nov 25 20:13:09 compute-0 systemd[1]: libpod-3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d.scope: Consumed 1.099s CPU time.
Nov 25 20:13:09 compute-0 podman[127839]: 2025-11-25 20:13:09.16268554 +0000 UTC m=+0.042748853 container died 3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-58bce81073cc9d56c713bd4c4de27f825f9d43307596cd2814479f4f7eeaabde-merged.mount: Deactivated successfully.
Nov 25 20:13:09 compute-0 podman[127839]: 2025-11-25 20:13:09.248529622 +0000 UTC m=+0.128592905 container remove 3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:13:09 compute-0 systemd[1]: libpod-conmon-3ea144be2f0c3c49fab12f32d9662f54b12bb5c40bac1161be0f8e25264d3b3d.scope: Deactivated successfully.
Nov 25 20:13:09 compute-0 python3.9[127838]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:13:09 compute-0 sudo[127535]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:09 compute-0 sudo[127855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:13:09 compute-0 sudo[127855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:09 compute-0 sudo[127855]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:09 compute-0 sudo[127880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:13:09 compute-0 sudo[127880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:09 compute-0 sudo[127880]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:09 compute-0 sudo[127905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:13:09 compute-0 sudo[127905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:09 compute-0 sudo[127905]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:09 compute-0 sudo[127930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:13:09 compute-0 sudo[127930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:09 compute-0 podman[128019]: 2025-11-25 20:13:09.897664717 +0000 UTC m=+0.046912034 container create 205f1894d820c33809dfcfeebc8df376b96690a199e558bcd616928c19fbc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:13:09 compute-0 systemd[1]: Started libpod-conmon-205f1894d820c33809dfcfeebc8df376b96690a199e558bcd616928c19fbc660.scope.
Nov 25 20:13:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:13:09 compute-0 ceph-mon[75144]: pgmap v246: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:09 compute-0 podman[128019]: 2025-11-25 20:13:09.880351014 +0000 UTC m=+0.029598361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:13:09 compute-0 podman[128019]: 2025-11-25 20:13:09.985107351 +0000 UTC m=+0.134354788 container init 205f1894d820c33809dfcfeebc8df376b96690a199e558bcd616928c19fbc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:13:09 compute-0 podman[128019]: 2025-11-25 20:13:09.992669714 +0000 UTC m=+0.141917071 container start 205f1894d820c33809dfcfeebc8df376b96690a199e558bcd616928c19fbc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:13:09 compute-0 podman[128019]: 2025-11-25 20:13:09.996378012 +0000 UTC m=+0.145625359 container attach 205f1894d820c33809dfcfeebc8df376b96690a199e558bcd616928c19fbc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:13:09 compute-0 festive_lederberg[128035]: 167 167
Nov 25 20:13:10 compute-0 systemd[1]: libpod-205f1894d820c33809dfcfeebc8df376b96690a199e558bcd616928c19fbc660.scope: Deactivated successfully.
Nov 25 20:13:10 compute-0 podman[128019]: 2025-11-25 20:13:10.001354455 +0000 UTC m=+0.150601812 container died 205f1894d820c33809dfcfeebc8df376b96690a199e558bcd616928c19fbc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:13:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e6f66706b2f6cf5d5793e62adcec8eea5adf461ccc961ccf36bf5da5f8ea614-merged.mount: Deactivated successfully.
Nov 25 20:13:10 compute-0 podman[128019]: 2025-11-25 20:13:10.057578477 +0000 UTC m=+0.206825834 container remove 205f1894d820c33809dfcfeebc8df376b96690a199e558bcd616928c19fbc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:13:10 compute-0 systemd[1]: libpod-conmon-205f1894d820c33809dfcfeebc8df376b96690a199e558bcd616928c19fbc660.scope: Deactivated successfully.
Nov 25 20:13:10 compute-0 podman[128109]: 2025-11-25 20:13:10.242985167 +0000 UTC m=+0.067694498 container create b3da2a852bcab12a5f0fa246ca7fa759e30e6d7d740066b4074a87338aaeaf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:13:10 compute-0 systemd[1]: Started libpod-conmon-b3da2a852bcab12a5f0fa246ca7fa759e30e6d7d740066b4074a87338aaeaf55.scope.
Nov 25 20:13:10 compute-0 podman[128109]: 2025-11-25 20:13:10.216438218 +0000 UTC m=+0.041147569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:13:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afe99b9a517e397bd981959f107c6375f142e7b346af8c4203e0d71f91865e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afe99b9a517e397bd981959f107c6375f142e7b346af8c4203e0d71f91865e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afe99b9a517e397bd981959f107c6375f142e7b346af8c4203e0d71f91865e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afe99b9a517e397bd981959f107c6375f142e7b346af8c4203e0d71f91865e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:10 compute-0 podman[128109]: 2025-11-25 20:13:10.346104661 +0000 UTC m=+0.170814042 container init b3da2a852bcab12a5f0fa246ca7fa759e30e6d7d740066b4074a87338aaeaf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:13:10 compute-0 podman[128109]: 2025-11-25 20:13:10.367400659 +0000 UTC m=+0.192109990 container start b3da2a852bcab12a5f0fa246ca7fa759e30e6d7d740066b4074a87338aaeaf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:13:10 compute-0 podman[128109]: 2025-11-25 20:13:10.372328441 +0000 UTC m=+0.197037782 container attach b3da2a852bcab12a5f0fa246ca7fa759e30e6d7d740066b4074a87338aaeaf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:13:10 compute-0 python3.9[128203]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:13:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v247: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:11 compute-0 strange_feistel[128129]: {
Nov 25 20:13:11 compute-0 strange_feistel[128129]:     "0": [
Nov 25 20:13:11 compute-0 strange_feistel[128129]:         {
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "devices": [
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "/dev/loop3"
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             ],
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_name": "ceph_lv0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_size": "21470642176",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "name": "ceph_lv0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "tags": {
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.cluster_name": "ceph",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.crush_device_class": "",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.encrypted": "0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.osd_id": "0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.type": "block",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.vdo": "0"
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             },
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "type": "block",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "vg_name": "ceph_vg0"
Nov 25 20:13:11 compute-0 strange_feistel[128129]:         }
Nov 25 20:13:11 compute-0 strange_feistel[128129]:     ],
Nov 25 20:13:11 compute-0 strange_feistel[128129]:     "1": [
Nov 25 20:13:11 compute-0 strange_feistel[128129]:         {
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "devices": [
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "/dev/loop4"
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             ],
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_name": "ceph_lv1",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_size": "21470642176",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "name": "ceph_lv1",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "tags": {
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.cluster_name": "ceph",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.crush_device_class": "",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.encrypted": "0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.osd_id": "1",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.type": "block",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.vdo": "0"
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             },
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "type": "block",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "vg_name": "ceph_vg1"
Nov 25 20:13:11 compute-0 strange_feistel[128129]:         }
Nov 25 20:13:11 compute-0 strange_feistel[128129]:     ],
Nov 25 20:13:11 compute-0 strange_feistel[128129]:     "2": [
Nov 25 20:13:11 compute-0 strange_feistel[128129]:         {
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "devices": [
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "/dev/loop5"
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             ],
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_name": "ceph_lv2",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_size": "21470642176",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "name": "ceph_lv2",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "tags": {
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.cluster_name": "ceph",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.crush_device_class": "",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.encrypted": "0",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.osd_id": "2",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.type": "block",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:                 "ceph.vdo": "0"
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             },
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "type": "block",
Nov 25 20:13:11 compute-0 strange_feistel[128129]:             "vg_name": "ceph_vg2"
Nov 25 20:13:11 compute-0 strange_feistel[128129]:         }
Nov 25 20:13:11 compute-0 strange_feistel[128129]:     ]
Nov 25 20:13:11 compute-0 strange_feistel[128129]: }
Nov 25 20:13:11 compute-0 systemd[1]: libpod-b3da2a852bcab12a5f0fa246ca7fa759e30e6d7d740066b4074a87338aaeaf55.scope: Deactivated successfully.
Nov 25 20:13:11 compute-0 podman[128109]: 2025-11-25 20:13:11.158355851 +0000 UTC m=+0.983065152 container died b3da2a852bcab12a5f0fa246ca7fa759e30e6d7d740066b4074a87338aaeaf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2afe99b9a517e397bd981959f107c6375f142e7b346af8c4203e0d71f91865e5-merged.mount: Deactivated successfully.
Nov 25 20:13:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:11 compute-0 podman[128109]: 2025-11-25 20:13:11.213513243 +0000 UTC m=+1.038222584 container remove b3da2a852bcab12a5f0fa246ca7fa759e30e6d7d740066b4074a87338aaeaf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:13:11 compute-0 systemd[1]: libpod-conmon-b3da2a852bcab12a5f0fa246ca7fa759e30e6d7d740066b4074a87338aaeaf55.scope: Deactivated successfully.
Nov 25 20:13:11 compute-0 sudo[127930]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:11 compute-0 sudo[128298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:13:11 compute-0 sudo[128298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:11 compute-0 sudo[128298]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:11 compute-0 sudo[128350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:13:11 compute-0 sudo[128350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:11 compute-0 sudo[128350]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:11 compute-0 sudo[128402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:13:11 compute-0 sudo[128402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:11 compute-0 sudo[128402]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:11 compute-0 sudo[128445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:13:11 compute-0 sudo[128445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:11 compute-0 python3.9[128437]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:13:11 compute-0 podman[128551]: 2025-11-25 20:13:11.945378067 +0000 UTC m=+0.061325359 container create 1483da9ed86d1a13124ea0de390e36ec1387dc3c5de1dfd7590eaa5b7ca12997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 20:13:11 compute-0 ceph-mon[75144]: pgmap v247: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:11 compute-0 systemd[1]: Started libpod-conmon-1483da9ed86d1a13124ea0de390e36ec1387dc3c5de1dfd7590eaa5b7ca12997.scope.
Nov 25 20:13:12 compute-0 podman[128551]: 2025-11-25 20:13:11.923879173 +0000 UTC m=+0.039826505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:13:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:13:12 compute-0 podman[128551]: 2025-11-25 20:13:12.052321222 +0000 UTC m=+0.168268534 container init 1483da9ed86d1a13124ea0de390e36ec1387dc3c5de1dfd7590eaa5b7ca12997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:13:12 compute-0 podman[128551]: 2025-11-25 20:13:12.060046079 +0000 UTC m=+0.175993361 container start 1483da9ed86d1a13124ea0de390e36ec1387dc3c5de1dfd7590eaa5b7ca12997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:13:12 compute-0 podman[128551]: 2025-11-25 20:13:12.063547252 +0000 UTC m=+0.179494534 container attach 1483da9ed86d1a13124ea0de390e36ec1387dc3c5de1dfd7590eaa5b7ca12997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:13:12 compute-0 keen_cray[128602]: 167 167
Nov 25 20:13:12 compute-0 systemd[1]: libpod-1483da9ed86d1a13124ea0de390e36ec1387dc3c5de1dfd7590eaa5b7ca12997.scope: Deactivated successfully.
Nov 25 20:13:12 compute-0 podman[128551]: 2025-11-25 20:13:12.067531548 +0000 UTC m=+0.183478840 container died 1483da9ed86d1a13124ea0de390e36ec1387dc3c5de1dfd7590eaa5b7ca12997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbeccf17b95318d2dbd4719589d224f4d626d7de9bc90b5752ceeb362fcb2685-merged.mount: Deactivated successfully.
Nov 25 20:13:12 compute-0 podman[128551]: 2025-11-25 20:13:12.111690788 +0000 UTC m=+0.227638090 container remove 1483da9ed86d1a13124ea0de390e36ec1387dc3c5de1dfd7590eaa5b7ca12997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 20:13:12 compute-0 systemd[1]: libpod-conmon-1483da9ed86d1a13124ea0de390e36ec1387dc3c5de1dfd7590eaa5b7ca12997.scope: Deactivated successfully.
Nov 25 20:13:12 compute-0 podman[128694]: 2025-11-25 20:13:12.322742493 +0000 UTC m=+0.056005377 container create 5fe65ff5805dbe2953340ebec8ca663fc2c3c8eed4744baf03bcd82d012da3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:13:12 compute-0 systemd[1]: Started libpod-conmon-5fe65ff5805dbe2953340ebec8ca663fc2c3c8eed4744baf03bcd82d012da3dd.scope.
Nov 25 20:13:12 compute-0 podman[128694]: 2025-11-25 20:13:12.304346222 +0000 UTC m=+0.037609086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:13:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b0f8a857f4f9333cac1b63517c3bd285f6d618b02b3c4307f60e72a60f3783/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b0f8a857f4f9333cac1b63517c3bd285f6d618b02b3c4307f60e72a60f3783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b0f8a857f4f9333cac1b63517c3bd285f6d618b02b3c4307f60e72a60f3783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b0f8a857f4f9333cac1b63517c3bd285f6d618b02b3c4307f60e72a60f3783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:13:12 compute-0 podman[128694]: 2025-11-25 20:13:12.418818719 +0000 UTC m=+0.152081613 container init 5fe65ff5805dbe2953340ebec8ca663fc2c3c8eed4744baf03bcd82d012da3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:13:12 compute-0 podman[128694]: 2025-11-25 20:13:12.42709519 +0000 UTC m=+0.160358044 container start 5fe65ff5805dbe2953340ebec8ca663fc2c3c8eed4744baf03bcd82d012da3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:13:12 compute-0 podman[128694]: 2025-11-25 20:13:12.430319836 +0000 UTC m=+0.163582690 container attach 5fe65ff5805dbe2953340ebec8ca663fc2c3c8eed4744baf03bcd82d012da3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:13:12 compute-0 python3.9[128701]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:13:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v248: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:12 compute-0 sshd-session[126936]: Connection closed by 192.168.122.30 port 45152
Nov 25 20:13:12 compute-0 sshd-session[126933]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:13:13 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 25 20:13:13 compute-0 systemd[1]: session-43.scope: Consumed 6.571s CPU time.
Nov 25 20:13:13 compute-0 systemd-logind[789]: Session 43 logged out. Waiting for processes to exit.
Nov 25 20:13:13 compute-0 systemd-logind[789]: Removed session 43.
Nov 25 20:13:13 compute-0 exciting_banach[128716]: {
Nov 25 20:13:13 compute-0 exciting_banach[128716]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "osd_id": 2,
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "type": "bluestore"
Nov 25 20:13:13 compute-0 exciting_banach[128716]:     },
Nov 25 20:13:13 compute-0 exciting_banach[128716]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "osd_id": 1,
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "type": "bluestore"
Nov 25 20:13:13 compute-0 exciting_banach[128716]:     },
Nov 25 20:13:13 compute-0 exciting_banach[128716]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "osd_id": 0,
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:13:13 compute-0 exciting_banach[128716]:         "type": "bluestore"
Nov 25 20:13:13 compute-0 exciting_banach[128716]:     }
Nov 25 20:13:13 compute-0 exciting_banach[128716]: }
Nov 25 20:13:13 compute-0 systemd[1]: libpod-5fe65ff5805dbe2953340ebec8ca663fc2c3c8eed4744baf03bcd82d012da3dd.scope: Deactivated successfully.
Nov 25 20:13:13 compute-0 podman[128773]: 2025-11-25 20:13:13.458574744 +0000 UTC m=+0.034116762 container died 5fe65ff5805dbe2953340ebec8ca663fc2c3c8eed4744baf03bcd82d012da3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-95b0f8a857f4f9333cac1b63517c3bd285f6d618b02b3c4307f60e72a60f3783-merged.mount: Deactivated successfully.
Nov 25 20:13:13 compute-0 podman[128773]: 2025-11-25 20:13:13.526552019 +0000 UTC m=+0.102094027 container remove 5fe65ff5805dbe2953340ebec8ca663fc2c3c8eed4744baf03bcd82d012da3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 20:13:13 compute-0 systemd[1]: libpod-conmon-5fe65ff5805dbe2953340ebec8ca663fc2c3c8eed4744baf03bcd82d012da3dd.scope: Deactivated successfully.
Nov 25 20:13:13 compute-0 sudo[128445]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:13:13 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:13:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:13:13 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:13:13 compute-0 sudo[128788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:13:13 compute-0 sudo[128788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:13 compute-0 sudo[128788]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:13 compute-0 sudo[128813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:13:13 compute-0 sudo[128813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:13:13 compute-0 sudo[128813]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:14 compute-0 ceph-mon[75144]: pgmap v248: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:14 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:13:14 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:13:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v249: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:16 compute-0 ceph-mon[75144]: pgmap v249: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v250: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:18 compute-0 ceph-mon[75144]: pgmap v250: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:18 compute-0 sshd-session[128838]: Accepted publickey for zuul from 192.168.122.30 port 51004 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:13:18 compute-0 systemd-logind[789]: New session 44 of user zuul.
Nov 25 20:13:18 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 25 20:13:18 compute-0 sshd-session[128838]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:13:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v251: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:19 compute-0 python3.9[128991]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:13:20 compute-0 ceph-mon[75144]: pgmap v251: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v252: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:21 compute-0 sudo[129145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auvbvalufrgxnpixtojovlexzrxhnsdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101600.8581536-50-247576993366603/AnsiballZ_file.py'
Nov 25 20:13:21 compute-0 sudo[129145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:21 compute-0 python3.9[129147]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:21 compute-0 sudo[129145]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:22 compute-0 ceph-mon[75144]: pgmap v252: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:22 compute-0 sudo[129297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdpjfyrphrywteexwiavbadforpofdjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101601.7689588-50-211667418914921/AnsiballZ_file.py'
Nov 25 20:13:22 compute-0 sudo[129297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:22 compute-0 python3.9[129299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:22 compute-0 sudo[129297]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v253: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:23 compute-0 ceph-mon[75144]: pgmap v253: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:23 compute-0 sudo[129449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgsamuyenxdtcwdxocdpuobejaglvpnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101602.5745535-65-257619076225684/AnsiballZ_stat.py'
Nov 25 20:13:23 compute-0 sudo[129449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:23 compute-0 python3.9[129451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:23 compute-0 sudo[129449]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:23 compute-0 sudo[129572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoecqzeikijfxqthgwyfaugpvawgdrgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101602.5745535-65-257619076225684/AnsiballZ_copy.py'
Nov 25 20:13:23 compute-0 sudo[129572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:24 compute-0 python3.9[129574]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101602.5745535-65-257619076225684/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2afcdcf1ecb0ce493773448da0a356edd010d701 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:24 compute-0 sudo[129572]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:24 compute-0 sudo[129724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbujvcknkthbttjpjybhgosqgxbtyqix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101604.3472683-65-45592770320277/AnsiballZ_stat.py'
Nov 25 20:13:24 compute-0 sudo[129724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v254: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:24 compute-0 python3.9[129726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:24 compute-0 sudo[129724]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:25 compute-0 sudo[129847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygbiotvfywhiaaybqcnezbsoomoerlis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101604.3472683-65-45592770320277/AnsiballZ_copy.py'
Nov 25 20:13:25 compute-0 sudo[129847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:25 compute-0 python3.9[129849]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101604.3472683-65-45592770320277/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=060afe98b04b6e4625fa31f4a675f795cca99736 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:25 compute-0 sudo[129847]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:25 compute-0 ceph-mon[75144]: pgmap v254: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:26 compute-0 sudo[129999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjyzzfcsbhtsuwwduryuyteoqbqyvezq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101605.731906-65-174092673727623/AnsiballZ_stat.py'
Nov 25 20:13:26 compute-0 sudo[129999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:26 compute-0 python3.9[130001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:26 compute-0 sudo[129999]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:26 compute-0 sudo[130122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frtrqcseronebotsvxmitjjzlsuxxkmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101605.731906-65-174092673727623/AnsiballZ_copy.py'
Nov 25 20:13:26 compute-0 sudo[130122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:13:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:13:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:13:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:13:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:13:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:13:26 compute-0 python3.9[130124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101605.731906-65-174092673727623/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0be9b5e815f17a021fb3ea99c02a0dccb8730633 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v255: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:26 compute-0 sudo[130122]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:27 compute-0 sudo[130274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdiobstohtjuwdpgcsdwcqhnzsbmfweg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101607.1836307-109-144500087311427/AnsiballZ_file.py'
Nov 25 20:13:27 compute-0 sudo[130274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:27 compute-0 python3.9[130276]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:27 compute-0 sudo[130274]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:27 compute-0 ceph-mon[75144]: pgmap v255: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:28 compute-0 sudo[130426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mapjhsyeahbbiknnxmcmwksjaafzwdfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101607.9752574-109-158005196953011/AnsiballZ_file.py'
Nov 25 20:13:28 compute-0 sudo[130426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:28 compute-0 python3.9[130428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:28 compute-0 sudo[130426]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v256: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:29 compute-0 sudo[130578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lznivyvxshvyjcvayavsvdhqjvpalxgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101608.7601469-124-263223192940165/AnsiballZ_stat.py'
Nov 25 20:13:29 compute-0 sudo[130578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:29 compute-0 python3.9[130580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:29 compute-0 sudo[130578]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:30 compute-0 ceph-mon[75144]: pgmap v256: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:30 compute-0 sudo[130702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kurggsyhlanyqkngsnananbhioefwppz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101608.7601469-124-263223192940165/AnsiballZ_copy.py'
Nov 25 20:13:30 compute-0 sudo[130702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:30 compute-0 python3.9[130704]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101608.7601469-124-263223192940165/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=780e04eca5935433a8f1846cc210ba69137222db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:30 compute-0 sudo[130702]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v257: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:31 compute-0 ceph-mon[75144]: pgmap v257: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:31 compute-0 sudo[130854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfambxsxrkdnuwssnqvsevxklriwmapz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101610.8837972-124-89948552440333/AnsiballZ_stat.py'
Nov 25 20:13:31 compute-0 sudo[130854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:31 compute-0 python3.9[130856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:31 compute-0 sudo[130854]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:31 compute-0 sudo[130977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeufmktgsmfakywunjditjpvmamkxmpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101610.8837972-124-89948552440333/AnsiballZ_copy.py'
Nov 25 20:13:31 compute-0 sudo[130977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:32 compute-0 python3.9[130979]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101610.8837972-124-89948552440333/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=8a821c9243550a9acb496066f406163465ed114e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:32 compute-0 sudo[130977]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:32 compute-0 sudo[131129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdbbbjbmsvupfzesjkybitkdrbwpaaur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101612.2794414-124-77492740744611/AnsiballZ_stat.py'
Nov 25 20:13:32 compute-0 sudo[131129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:32 compute-0 python3.9[131131]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:32 compute-0 sudo[131129]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v258: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:33 compute-0 sudo[131252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trtadvlaldmvgvfrqvikopfpmkjzyknp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101612.2794414-124-77492740744611/AnsiballZ_copy.py'
Nov 25 20:13:33 compute-0 sudo[131252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:33 compute-0 python3.9[131254]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101612.2794414-124-77492740744611/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6eb116bbbd1eb1b40395951fe1eed07086e5fd32 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:33 compute-0 sudo[131252]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:33 compute-0 ceph-mon[75144]: pgmap v258: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:34 compute-0 sudo[131404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzmqzybxwnmdecvizbbkmggemmonzxfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101613.695869-168-259579758133831/AnsiballZ_file.py'
Nov 25 20:13:34 compute-0 sudo[131404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:34 compute-0 python3.9[131406]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:34 compute-0 sudo[131404]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:34 compute-0 sudo[131556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugkzwvsojacscnezxphudcddisygwicn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101614.4114237-168-210841341112834/AnsiballZ_file.py'
Nov 25 20:13:34 compute-0 sudo[131556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v259: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:35 compute-0 python3.9[131558]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:35 compute-0 sudo[131556]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:35 compute-0 sudo[131708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjxooliigpbfiipakfnokyxfcgduzhnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101615.2470052-183-71716678412807/AnsiballZ_stat.py'
Nov 25 20:13:35 compute-0 sudo[131708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:35 compute-0 python3.9[131710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:35 compute-0 sudo[131708]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:35 compute-0 ceph-mon[75144]: pgmap v259: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:36 compute-0 sudo[131831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xraeklrvhirwjrsjcgojxrqdkbyadsho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101615.2470052-183-71716678412807/AnsiballZ_copy.py'
Nov 25 20:13:36 compute-0 sudo[131831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:36 compute-0 python3.9[131833]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101615.2470052-183-71716678412807/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=102f0c4db9d86e005d3d8a9277d145249a5f8b4f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:36 compute-0 sudo[131831]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v260: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:37 compute-0 sudo[131983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbzktpalygfogsboyakqvizixfjggmrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101616.678907-183-146186131054889/AnsiballZ_stat.py'
Nov 25 20:13:37 compute-0 sudo[131983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:37 compute-0 python3.9[131985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:37 compute-0 sudo[131983]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:37 compute-0 sudo[132106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujhzljlfrnpiocffpuouuehstzqwupua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101616.678907-183-146186131054889/AnsiballZ_copy.py'
Nov 25 20:13:37 compute-0 sudo[132106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:37 compute-0 python3.9[132108]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101616.678907-183-146186131054889/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=8a821c9243550a9acb496066f406163465ed114e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:37 compute-0 sudo[132106]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:37 compute-0 ceph-mon[75144]: pgmap v260: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:38 compute-0 sudo[132258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alcrievaabwboiujzuxcnayjetihebtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101618.124623-183-168519384572714/AnsiballZ_stat.py'
Nov 25 20:13:38 compute-0 sudo[132258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:38 compute-0 python3.9[132260]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:38 compute-0 sudo[132258]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v261: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:39 compute-0 sudo[132381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jalasfxdhhwesxozyefzbdbtkpjusnqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101618.124623-183-168519384572714/AnsiballZ_copy.py'
Nov 25 20:13:39 compute-0 sudo[132381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:39 compute-0 python3.9[132383]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101618.124623-183-168519384572714/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6dd343fe13139d83c90fa37354e4869eff128435 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:39 compute-0 sudo[132381]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:39 compute-0 ceph-mon[75144]: pgmap v261: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:40 compute-0 sudo[132533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkkvutdtksrygyaptgrvinywzxcglgzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101620.2685184-243-223438974968744/AnsiballZ_file.py'
Nov 25 20:13:40 compute-0 sudo[132533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:40 compute-0 python3.9[132535]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:40 compute-0 sudo[132533]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v262: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:41 compute-0 sudo[132685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erqllhktxajcgziohmjadbqexoukcxtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101621.1107268-251-104623241882694/AnsiballZ_stat.py'
Nov 25 20:13:41 compute-0 sudo[132685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:41 compute-0 python3.9[132687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:41 compute-0 sudo[132685]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:41 compute-0 ceph-mon[75144]: pgmap v262: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:42 compute-0 sudo[132808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdudbjzjhvrxdtlusvnoypwrapbpwjsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101621.1107268-251-104623241882694/AnsiballZ_copy.py'
Nov 25 20:13:42 compute-0 sudo[132808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:42 compute-0 python3.9[132810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101621.1107268-251-104623241882694/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=66070892c6d01a4a91d50802bdc535b7b429f97a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:42 compute-0 sudo[132808]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v263: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:43 compute-0 sudo[132960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evlvhubtfwvywotxppvebadcsgdijatt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101622.7311244-267-22803025979740/AnsiballZ_file.py'
Nov 25 20:13:43 compute-0 sudo[132960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:43 compute-0 python3.9[132962]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:43 compute-0 sudo[132960]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:43 compute-0 sudo[133112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqpeqazmxodxjjbotosmznrpbqqabpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101623.5711823-275-191175115641819/AnsiballZ_stat.py'
Nov 25 20:13:43 compute-0 sudo[133112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:44 compute-0 ceph-mon[75144]: pgmap v263: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:44 compute-0 python3.9[133114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:44 compute-0 sudo[133112]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:44 compute-0 sudo[133235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rariecwckwrndcdednykenaehyfzhnsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101623.5711823-275-191175115641819/AnsiballZ_copy.py'
Nov 25 20:13:44 compute-0 sudo[133235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v264: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:44 compute-0 python3.9[133237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101623.5711823-275-191175115641819/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=66070892c6d01a4a91d50802bdc535b7b429f97a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:44 compute-0 sudo[133235]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:45 compute-0 sudo[133387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plesxqyhmrhjgdbwducskfzdmiweierz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101625.2376323-291-51365622137551/AnsiballZ_file.py'
Nov 25 20:13:45 compute-0 sudo[133387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:45 compute-0 python3.9[133389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:45 compute-0 sudo[133387]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:46 compute-0 ceph-mon[75144]: pgmap v264: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:46 compute-0 sudo[133539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqcsbgepwdkmyxfrbiamcqpejzyswusa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101626.0779471-299-9191914794286/AnsiballZ_stat.py'
Nov 25 20:13:46 compute-0 sudo[133539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:46 compute-0 python3.9[133541]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:46 compute-0 sudo[133539]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v265: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:47 compute-0 sudo[133662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nerqjvyiarxgtaepljsykbzonkrdzhit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101626.0779471-299-9191914794286/AnsiballZ_copy.py'
Nov 25 20:13:47 compute-0 sudo[133662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:47 compute-0 python3.9[133664]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101626.0779471-299-9191914794286/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=66070892c6d01a4a91d50802bdc535b7b429f97a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:47 compute-0 sudo[133662]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:48 compute-0 sudo[133814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ardemkebnanpkzqfrgpqproiomltoxkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101627.654907-315-65910163402233/AnsiballZ_file.py'
Nov 25 20:13:48 compute-0 sudo[133814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:48 compute-0 ceph-mon[75144]: pgmap v265: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:48 compute-0 python3.9[133816]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:48 compute-0 sudo[133814]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:48 compute-0 sudo[133966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fazrkdmjxovymokkalmnceartviipnpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101628.4591982-323-196635464329544/AnsiballZ_stat.py'
Nov 25 20:13:48 compute-0 sudo[133966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v266: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:49 compute-0 python3.9[133968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:49 compute-0 sudo[133966]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:49 compute-0 sudo[134089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqztonodyapmtsmcnakbgsijrldmjkow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101628.4591982-323-196635464329544/AnsiballZ_copy.py'
Nov 25 20:13:49 compute-0 sudo[134089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:49 compute-0 python3.9[134091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101628.4591982-323-196635464329544/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=66070892c6d01a4a91d50802bdc535b7b429f97a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:49 compute-0 sudo[134089]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:50 compute-0 ceph-mon[75144]: pgmap v266: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:50 compute-0 sudo[134241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnjmqbulopuuxsoqimkhrfggouniyyzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101630.0620055-339-41988419346853/AnsiballZ_file.py'
Nov 25 20:13:50 compute-0 sudo[134241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:50 compute-0 python3.9[134243]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:50 compute-0 sudo[134241]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v267: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:51 compute-0 sudo[134393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfrzyfifrsndogajkrqzbkszxikqibyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101630.9254892-347-138636780109221/AnsiballZ_stat.py'
Nov 25 20:13:51 compute-0 sudo[134393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:51 compute-0 python3.9[134395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:51 compute-0 sudo[134393]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:52 compute-0 sudo[134516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnlkabtjxgogjfmegrwnuqrawoayfrvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101630.9254892-347-138636780109221/AnsiballZ_copy.py'
Nov 25 20:13:52 compute-0 sudo[134516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:52 compute-0 ceph-mon[75144]: pgmap v267: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.056564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101632057167, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6435, "num_deletes": 251, "total_data_size": 6998547, "memory_usage": 7249312, "flush_reason": "Manual Compaction"}
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101632095135, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 5270005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 140, "largest_seqno": 6572, "table_properties": {"data_size": 5248153, "index_size": 13930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6853, "raw_key_size": 60550, "raw_average_key_size": 22, "raw_value_size": 5196573, "raw_average_value_size": 1897, "num_data_blocks": 627, "num_entries": 2739, "num_filter_entries": 2739, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101111, "oldest_key_time": 1764101111, "file_creation_time": 1764101632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 38655 microseconds, and 24043 cpu microseconds.
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.095220) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 5270005 bytes OK
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.095256) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.096877) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.096895) EVENT_LOG_v1 {"time_micros": 1764101632096889, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.096918) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 6971611, prev total WAL file size 6971611, number of live WAL files 2.
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.098846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(5146KB) 13(52KB) 8(1944B)]
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101632098996, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 5326098, "oldest_snapshot_seqno": -1}
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2556 keys, 5282140 bytes, temperature: kUnknown
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101632136716, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 5282140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5260654, "index_size": 14017, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6405, "raw_key_size": 58645, "raw_average_key_size": 22, "raw_value_size": 5210613, "raw_average_value_size": 2038, "num_data_blocks": 631, "num_entries": 2556, "num_filter_entries": 2556, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764101632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.137127) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 5282140 bytes
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.138429) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.7 rd, 139.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(5.1, 0.0 +0.0 blob) out(5.0 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2844, records dropped: 288 output_compression: NoCompression
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.138464) EVENT_LOG_v1 {"time_micros": 1764101632138447, "job": 4, "event": "compaction_finished", "compaction_time_micros": 37865, "compaction_time_cpu_micros": 21976, "output_level": 6, "num_output_files": 1, "total_output_size": 5282140, "num_input_records": 2844, "num_output_records": 2556, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101632140503, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101632140641, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101632140728, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 25 20:13:52 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:13:52.098642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:13:52 compute-0 python3.9[134518]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101630.9254892-347-138636780109221/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=66070892c6d01a4a91d50802bdc535b7b429f97a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:52 compute-0 sudo[134516]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:52 compute-0 sudo[134669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zktygtldmhwzshziyorwlqfalflrzxlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101632.5267446-363-161770458228048/AnsiballZ_file.py'
Nov 25 20:13:52 compute-0 sudo[134669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v268: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:53 compute-0 python3.9[134671]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:13:53 compute-0 sudo[134669]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:53 compute-0 sudo[134821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiaonhfhzsrcllzgrgodjqugljtxiqyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101633.2694592-371-255670993332158/AnsiballZ_stat.py'
Nov 25 20:13:53 compute-0 sudo[134821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:53 compute-0 python3.9[134823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:13:53 compute-0 sudo[134821]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:54 compute-0 ceph-mon[75144]: pgmap v268: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:54 compute-0 sudo[134944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agfcdefatqyztothadarmahnvybfcbhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101633.2694592-371-255670993332158/AnsiballZ_copy.py'
Nov 25 20:13:54 compute-0 sudo[134944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:13:54 compute-0 python3.9[134946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101633.2694592-371-255670993332158/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=66070892c6d01a4a91d50802bdc535b7b429f97a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:13:54 compute-0 sudo[134944]: pam_unix(sudo:session): session closed for user root
Nov 25 20:13:54 compute-0 sshd-session[128841]: Connection closed by 192.168.122.30 port 51004
Nov 25 20:13:54 compute-0 sshd-session[128838]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:13:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v269: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:54 compute-0 systemd-logind[789]: Session 44 logged out. Waiting for processes to exit.
Nov 25 20:13:54 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 25 20:13:54 compute-0 systemd[1]: session-44.scope: Consumed 27.562s CPU time.
Nov 25 20:13:54 compute-0 systemd-logind[789]: Removed session 44.
Nov 25 20:13:56 compute-0 ceph-mon[75144]: pgmap v269: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v270: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:13:56
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['backups', 'vms', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 25 20:13:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:13:58 compute-0 ceph-mon[75144]: pgmap v270: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:13:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v271: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:00 compute-0 ceph-mon[75144]: pgmap v271: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:00 compute-0 sshd-session[134971]: Accepted publickey for zuul from 192.168.122.30 port 56092 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:14:00 compute-0 systemd-logind[789]: New session 45 of user zuul.
Nov 25 20:14:00 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 25 20:14:00 compute-0 sshd-session[134971]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:14:00 compute-0 sudo[135124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acynsblqibjawrlntqknszndtfgunxnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101640.3211715-22-226961115840508/AnsiballZ_file.py'
Nov 25 20:14:00 compute-0 sudo[135124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v272: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:01 compute-0 python3.9[135126]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:01 compute-0 sudo[135124]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:01 compute-0 sudo[135276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yatlqotdhmgkmjkiqprnlkxidfxixvsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101641.3225832-34-210778214322541/AnsiballZ_stat.py'
Nov 25 20:14:01 compute-0 sudo[135276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:14:02 compute-0 ceph-mon[75144]: pgmap v272: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:02 compute-0 python3.9[135278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:02 compute-0 sudo[135276]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:02 compute-0 sudo[135399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmvpdbfveaqycjcqedyiwqtdlnogglot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101641.3225832-34-210778214322541/AnsiballZ_copy.py'
Nov 25 20:14:02 compute-0 sudo[135399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v273: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:02 compute-0 python3.9[135401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764101641.3225832-34-210778214322541/.source.conf _original_basename=ceph.conf follow=False checksum=a627534733e3ce27204ff58366e481b790612699 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:02 compute-0 sudo[135399]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:03 compute-0 sudo[135551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixjgzppbcczqusoqpqfxuhanzklbbjme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101643.1622095-34-146636521111237/AnsiballZ_stat.py'
Nov 25 20:14:03 compute-0 sudo[135551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:03 compute-0 python3.9[135553]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:03 compute-0 sudo[135551]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:04 compute-0 ceph-mon[75144]: pgmap v273: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:04 compute-0 sudo[135674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffobtrtpmpomypentyarocuqjuohdjwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101643.1622095-34-146636521111237/AnsiballZ_copy.py'
Nov 25 20:14:04 compute-0 sudo[135674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:04 compute-0 python3.9[135676]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764101643.1622095-34-146636521111237/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=86ed7b0354b7c5dc128f5b75fa89f43fbe905230 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:04 compute-0 sudo[135674]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:04 compute-0 sshd-session[134974]: Connection closed by 192.168.122.30 port 56092
Nov 25 20:14:04 compute-0 sshd-session[134971]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:14:04 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 25 20:14:04 compute-0 systemd[1]: session-45.scope: Consumed 3.239s CPU time.
Nov 25 20:14:04 compute-0 systemd-logind[789]: Session 45 logged out. Waiting for processes to exit.
Nov 25 20:14:04 compute-0 systemd-logind[789]: Removed session 45.
Nov 25 20:14:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v274: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:06 compute-0 ceph-mon[75144]: pgmap v274: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v275: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:08 compute-0 ceph-mon[75144]: pgmap v275: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v276: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:10 compute-0 ceph-mon[75144]: pgmap v276: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:10 compute-0 sshd-session[135701]: Accepted publickey for zuul from 192.168.122.30 port 59318 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:14:10 compute-0 systemd-logind[789]: New session 46 of user zuul.
Nov 25 20:14:10 compute-0 systemd[1]: Started Session 46 of User zuul.
Nov 25 20:14:10 compute-0 sshd-session[135701]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:14:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v277: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:11 compute-0 python3.9[135854]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:14:12 compute-0 ceph-mon[75144]: pgmap v277: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:12 compute-0 sudo[136008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckgphndweijcqesrkenoafgpfwttotqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101651.95293-34-259380750348033/AnsiballZ_file.py'
Nov 25 20:14:12 compute-0 sudo[136008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:12 compute-0 python3.9[136010]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:14:12 compute-0 sudo[136008]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v278: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:13 compute-0 sudo[136160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfsonvwduywuqrlhhzjpqkjatsrqgekl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101652.866792-34-114336085438355/AnsiballZ_file.py'
Nov 25 20:14:13 compute-0 sudo[136160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:13 compute-0 python3.9[136162]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:14:13 compute-0 sudo[136160]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:13 compute-0 sudo[136251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:14:13 compute-0 sudo[136251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:13 compute-0 sudo[136251]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:13 compute-0 sudo[136291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:14:13 compute-0 sudo[136291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:13 compute-0 sudo[136291]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:14 compute-0 sudo[136345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:14:14 compute-0 sudo[136345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:14 compute-0 sudo[136345]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:14 compute-0 sudo[136388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:14:14 compute-0 sudo[136388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:14 compute-0 ceph-mon[75144]: pgmap v278: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:14 compute-0 python3.9[136380]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:14:14 compute-0 sudo[136388]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:14:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:14:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:14:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:14:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:14:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:14:14 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 5c2e80e0-2a41-4add-861f-9fe83c35c0ce does not exist
Nov 25 20:14:14 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 5398258f-2567-4aba-95f1-4d4f0e27eda7 does not exist
Nov 25 20:14:14 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 9e1d4a03-b5a8-464e-b53a-7ea1b6f91a0a does not exist
Nov 25 20:14:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:14:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:14:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:14:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:14:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:14:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:14:14 compute-0 sudo[136521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:14:14 compute-0 sudo[136521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:14 compute-0 sudo[136521]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:14 compute-0 sudo[136546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:14:14 compute-0 sudo[136546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:14 compute-0 sudo[136546]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:14 compute-0 sudo[136594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:14:14 compute-0 sudo[136594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:14 compute-0 sudo[136594]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v279: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:14 compute-0 sudo[136643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:14:14 compute-0 sudo[136643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:14 compute-0 sudo[136693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nazivhskyiiccfkvhpygrucinmfqzvge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101654.4971774-57-127621075450217/AnsiballZ_seboolean.py'
Nov 25 20:14:14 compute-0 sudo[136693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:14:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:14:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:14:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:14:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:14:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:14:15 compute-0 ceph-mon[75144]: pgmap v279: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:15 compute-0 python3.9[136696]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 20:14:15 compute-0 podman[136735]: 2025-11-25 20:14:15.297589526 +0000 UTC m=+0.048545477 container create 6e1b98f8cf5a1038e3572544d2f017ef9829aff03f948beb9dd6ec3d586f3a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:14:15 compute-0 systemd[1]: Started libpod-conmon-6e1b98f8cf5a1038e3572544d2f017ef9829aff03f948beb9dd6ec3d586f3a7e.scope.
Nov 25 20:14:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:14:15 compute-0 podman[136735]: 2025-11-25 20:14:15.276617317 +0000 UTC m=+0.027573288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:14:15 compute-0 podman[136735]: 2025-11-25 20:14:15.379658668 +0000 UTC m=+0.130614649 container init 6e1b98f8cf5a1038e3572544d2f017ef9829aff03f948beb9dd6ec3d586f3a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:14:15 compute-0 podman[136735]: 2025-11-25 20:14:15.388868334 +0000 UTC m=+0.139824285 container start 6e1b98f8cf5a1038e3572544d2f017ef9829aff03f948beb9dd6ec3d586f3a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:14:15 compute-0 podman[136735]: 2025-11-25 20:14:15.392752567 +0000 UTC m=+0.143708538 container attach 6e1b98f8cf5a1038e3572544d2f017ef9829aff03f948beb9dd6ec3d586f3a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:14:15 compute-0 serene_shtern[136751]: 167 167
Nov 25 20:14:15 compute-0 systemd[1]: libpod-6e1b98f8cf5a1038e3572544d2f017ef9829aff03f948beb9dd6ec3d586f3a7e.scope: Deactivated successfully.
Nov 25 20:14:15 compute-0 podman[136735]: 2025-11-25 20:14:15.395159412 +0000 UTC m=+0.146115363 container died 6e1b98f8cf5a1038e3572544d2f017ef9829aff03f948beb9dd6ec3d586f3a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-070fc41e96a9de42f454c4e57c0877641e028ce638f8cd90ae790fc5a89900ac-merged.mount: Deactivated successfully.
Nov 25 20:14:15 compute-0 podman[136735]: 2025-11-25 20:14:15.435940151 +0000 UTC m=+0.186896102 container remove 6e1b98f8cf5a1038e3572544d2f017ef9829aff03f948beb9dd6ec3d586f3a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:14:15 compute-0 systemd[1]: libpod-conmon-6e1b98f8cf5a1038e3572544d2f017ef9829aff03f948beb9dd6ec3d586f3a7e.scope: Deactivated successfully.
Nov 25 20:14:15 compute-0 podman[136774]: 2025-11-25 20:14:15.6261496 +0000 UTC m=+0.063240250 container create 4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:14:15 compute-0 systemd[1]: Started libpod-conmon-4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39.scope.
Nov 25 20:14:15 compute-0 podman[136774]: 2025-11-25 20:14:15.587040796 +0000 UTC m=+0.024131426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:14:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f713d39c3ba2ac9d7e3c5d0b3626a6b3b8779d9201cd6691ba30258b86d35c3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f713d39c3ba2ac9d7e3c5d0b3626a6b3b8779d9201cd6691ba30258b86d35c3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f713d39c3ba2ac9d7e3c5d0b3626a6b3b8779d9201cd6691ba30258b86d35c3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f713d39c3ba2ac9d7e3c5d0b3626a6b3b8779d9201cd6691ba30258b86d35c3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f713d39c3ba2ac9d7e3c5d0b3626a6b3b8779d9201cd6691ba30258b86d35c3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:15 compute-0 podman[136774]: 2025-11-25 20:14:15.726352475 +0000 UTC m=+0.163443115 container init 4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:14:15 compute-0 podman[136774]: 2025-11-25 20:14:15.741656694 +0000 UTC m=+0.178747304 container start 4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:14:15 compute-0 podman[136774]: 2025-11-25 20:14:15.74826103 +0000 UTC m=+0.185351660 container attach 4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:14:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:16 compute-0 sudo[136693]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:16 compute-0 distracted_robinson[136790]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:14:16 compute-0 distracted_robinson[136790]: --> relative data size: 1.0
Nov 25 20:14:16 compute-0 distracted_robinson[136790]: --> All data devices are unavailable
Nov 25 20:14:16 compute-0 systemd[1]: libpod-4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39.scope: Deactivated successfully.
Nov 25 20:14:16 compute-0 systemd[1]: libpod-4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39.scope: Consumed 1.078s CPU time.
Nov 25 20:14:16 compute-0 podman[136774]: 2025-11-25 20:14:16.916086835 +0000 UTC m=+1.353177455 container died 4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:14:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v280: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:16 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 20:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f713d39c3ba2ac9d7e3c5d0b3626a6b3b8779d9201cd6691ba30258b86d35c3d-merged.mount: Deactivated successfully.
Nov 25 20:14:17 compute-0 podman[136774]: 2025-11-25 20:14:17.00654542 +0000 UTC m=+1.443636070 container remove 4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_robinson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 20:14:17 compute-0 systemd[1]: libpod-conmon-4807f35c9ba310ce59d1be8748d79662025f773d4d7d2a0f3f1bcf197375db39.scope: Deactivated successfully.
Nov 25 20:14:17 compute-0 sudo[136643]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:17 compute-0 sudo[136956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:14:17 compute-0 sudo[136956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:17 compute-0 sudo[136956]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:17 compute-0 sudo[137006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ripgvomjkginsqporeozgcqaxfadlavn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101656.76376-67-69792488523974/AnsiballZ_setup.py'
Nov 25 20:14:17 compute-0 sudo[137006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:17 compute-0 sudo[137010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:14:17 compute-0 sudo[137010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:17 compute-0 sudo[137010]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:17 compute-0 sudo[137035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:14:17 compute-0 sudo[137035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:17 compute-0 sudo[137035]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:17 compute-0 sudo[137060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:14:17 compute-0 sudo[137060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:17 compute-0 python3.9[137009]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:14:17 compute-0 sudo[137006]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:17 compute-0 podman[137133]: 2025-11-25 20:14:17.773397758 +0000 UTC m=+0.053966732 container create 3ccb17a023265534a53509fcc1524e0d56ba8bd930a54c958670ae30605f0541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:14:17 compute-0 systemd[1]: Started libpod-conmon-3ccb17a023265534a53509fcc1524e0d56ba8bd930a54c958670ae30605f0541.scope.
Nov 25 20:14:17 compute-0 podman[137133]: 2025-11-25 20:14:17.746650143 +0000 UTC m=+0.027219207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:14:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:14:17 compute-0 podman[137133]: 2025-11-25 20:14:17.875195016 +0000 UTC m=+0.155764090 container init 3ccb17a023265534a53509fcc1524e0d56ba8bd930a54c958670ae30605f0541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:14:17 compute-0 podman[137133]: 2025-11-25 20:14:17.887348181 +0000 UTC m=+0.167917155 container start 3ccb17a023265534a53509fcc1524e0d56ba8bd930a54c958670ae30605f0541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:14:17 compute-0 podman[137133]: 2025-11-25 20:14:17.893259119 +0000 UTC m=+0.173828153 container attach 3ccb17a023265534a53509fcc1524e0d56ba8bd930a54c958670ae30605f0541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:14:17 compute-0 elated_aryabhata[137149]: 167 167
Nov 25 20:14:17 compute-0 systemd[1]: libpod-3ccb17a023265534a53509fcc1524e0d56ba8bd930a54c958670ae30605f0541.scope: Deactivated successfully.
Nov 25 20:14:17 compute-0 podman[137133]: 2025-11-25 20:14:17.895888499 +0000 UTC m=+0.176457493 container died 3ccb17a023265534a53509fcc1524e0d56ba8bd930a54c958670ae30605f0541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-75acc2672658673e762b4a4433bcb1beb1c2dde1d0fdc19d7a1796c1c46f88fc-merged.mount: Deactivated successfully.
Nov 25 20:14:17 compute-0 podman[137133]: 2025-11-25 20:14:17.944371633 +0000 UTC m=+0.224940607 container remove 3ccb17a023265534a53509fcc1524e0d56ba8bd930a54c958670ae30605f0541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:14:17 compute-0 systemd[1]: libpod-conmon-3ccb17a023265534a53509fcc1524e0d56ba8bd930a54c958670ae30605f0541.scope: Deactivated successfully.
Nov 25 20:14:17 compute-0 ceph-mon[75144]: pgmap v280: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:18 compute-0 podman[137199]: 2025-11-25 20:14:18.159276122 +0000 UTC m=+0.071207993 container create 707bc0bcc15ebd8dfdf728928d81b9a804390f41d6bd6c837e83989abfa4b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shannon, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:14:18 compute-0 systemd[1]: Started libpod-conmon-707bc0bcc15ebd8dfdf728928d81b9a804390f41d6bd6c837e83989abfa4b43b.scope.
Nov 25 20:14:18 compute-0 podman[137199]: 2025-11-25 20:14:18.135349583 +0000 UTC m=+0.047281544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:14:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f2dafd94920d91bcea0a9ed22ffefbe725787aa0ff4d892482cf2b109609db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f2dafd94920d91bcea0a9ed22ffefbe725787aa0ff4d892482cf2b109609db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f2dafd94920d91bcea0a9ed22ffefbe725787aa0ff4d892482cf2b109609db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f2dafd94920d91bcea0a9ed22ffefbe725787aa0ff4d892482cf2b109609db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:18 compute-0 sudo[137269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnxvkxirncrezthmlhyydavhcmgsmzme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101656.76376-67-69792488523974/AnsiballZ_dnf.py'
Nov 25 20:14:18 compute-0 podman[137199]: 2025-11-25 20:14:18.257097965 +0000 UTC m=+0.169029856 container init 707bc0bcc15ebd8dfdf728928d81b9a804390f41d6bd6c837e83989abfa4b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:14:18 compute-0 sudo[137269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:18 compute-0 podman[137199]: 2025-11-25 20:14:18.265690994 +0000 UTC m=+0.177622865 container start 707bc0bcc15ebd8dfdf728928d81b9a804390f41d6bd6c837e83989abfa4b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shannon, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:14:18 compute-0 podman[137199]: 2025-11-25 20:14:18.269347701 +0000 UTC m=+0.181279572 container attach 707bc0bcc15ebd8dfdf728928d81b9a804390f41d6bd6c837e83989abfa4b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:14:18 compute-0 python3.9[137272]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:14:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v281: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:19 compute-0 nice_shannon[137264]: {
Nov 25 20:14:19 compute-0 nice_shannon[137264]:     "0": [
Nov 25 20:14:19 compute-0 nice_shannon[137264]:         {
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "devices": [
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "/dev/loop3"
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             ],
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_name": "ceph_lv0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_size": "21470642176",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "name": "ceph_lv0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "tags": {
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.cluster_name": "ceph",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.crush_device_class": "",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.encrypted": "0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.osd_id": "0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.type": "block",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.vdo": "0"
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             },
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "type": "block",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "vg_name": "ceph_vg0"
Nov 25 20:14:19 compute-0 nice_shannon[137264]:         }
Nov 25 20:14:19 compute-0 nice_shannon[137264]:     ],
Nov 25 20:14:19 compute-0 nice_shannon[137264]:     "1": [
Nov 25 20:14:19 compute-0 nice_shannon[137264]:         {
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "devices": [
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "/dev/loop4"
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             ],
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_name": "ceph_lv1",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_size": "21470642176",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "name": "ceph_lv1",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "tags": {
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.cluster_name": "ceph",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.crush_device_class": "",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.encrypted": "0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.osd_id": "1",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.type": "block",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.vdo": "0"
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             },
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "type": "block",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "vg_name": "ceph_vg1"
Nov 25 20:14:19 compute-0 nice_shannon[137264]:         }
Nov 25 20:14:19 compute-0 nice_shannon[137264]:     ],
Nov 25 20:14:19 compute-0 nice_shannon[137264]:     "2": [
Nov 25 20:14:19 compute-0 nice_shannon[137264]:         {
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "devices": [
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "/dev/loop5"
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             ],
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_name": "ceph_lv2",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_size": "21470642176",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "name": "ceph_lv2",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "tags": {
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.cluster_name": "ceph",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.crush_device_class": "",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.encrypted": "0",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.osd_id": "2",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.type": "block",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:                 "ceph.vdo": "0"
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             },
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "type": "block",
Nov 25 20:14:19 compute-0 nice_shannon[137264]:             "vg_name": "ceph_vg2"
Nov 25 20:14:19 compute-0 nice_shannon[137264]:         }
Nov 25 20:14:19 compute-0 nice_shannon[137264]:     ]
Nov 25 20:14:19 compute-0 nice_shannon[137264]: }
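[The JSON block above is the stdout of the `cephadm ... ceph-volume -- lvm list --format json` call logged at 20:14:17: a map keyed by OSD id, each value a list of logical-volume records whose `tags` carry the cluster and OSD fsids. A minimal parsing sketch, assuming only the structure shown above; the sample is abridged from this log and `parse_lvm_list` is an illustrative helper, not part of cephadm or ceph-volume:]

    # Sketch: summarize `ceph-volume lvm list --format json` output into
    # {osd_id: (lv_path, osd_fsid)}. Structure assumed from the log above.
    import json

    sample = '''
    {
        "0": [
            {
                "devices": ["/dev/loop3"],
                "lv_path": "/dev/ceph_vg0/ceph_lv0",
                "tags": {
                    "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
                    "ceph.type": "block"
                }
            }
        ]
    }
    '''

    def parse_lvm_list(raw: str) -> dict:
        """Map each OSD id to its block LV path and osd_fsid."""
        out = {}
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                # Only 'block' LVs back the OSD data device in this layout.
                if lv["tags"].get("ceph.type") == "block":
                    out[int(osd_id)] = (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
        return out

    print(parse_lvm_list(sample))
    # -> {0: ('/dev/ceph_vg0/ceph_lv0', 'f0a2211a-2b5d-4914-9a66-9743102e8fa4')}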
Nov 25 20:14:19 compute-0 systemd[1]: libpod-707bc0bcc15ebd8dfdf728928d81b9a804390f41d6bd6c837e83989abfa4b43b.scope: Deactivated successfully.
Nov 25 20:14:19 compute-0 podman[137199]: 2025-11-25 20:14:19.068506981 +0000 UTC m=+0.980438922 container died 707bc0bcc15ebd8dfdf728928d81b9a804390f41d6bd6c837e83989abfa4b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5f2dafd94920d91bcea0a9ed22ffefbe725787aa0ff4d892482cf2b109609db-merged.mount: Deactivated successfully.
Nov 25 20:14:19 compute-0 podman[137199]: 2025-11-25 20:14:19.140101393 +0000 UTC m=+1.052033264 container remove 707bc0bcc15ebd8dfdf728928d81b9a804390f41d6bd6c837e83989abfa4b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:14:19 compute-0 systemd[1]: libpod-conmon-707bc0bcc15ebd8dfdf728928d81b9a804390f41d6bd6c837e83989abfa4b43b.scope: Deactivated successfully.
Nov 25 20:14:19 compute-0 sudo[137060]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:19 compute-0 sudo[137293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:14:19 compute-0 sudo[137293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:19 compute-0 sudo[137293]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:19 compute-0 sudo[137318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:14:19 compute-0 sudo[137318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:19 compute-0 sudo[137318]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:19 compute-0 sudo[137343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:14:19 compute-0 sudo[137343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:19 compute-0 sudo[137343]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:19 compute-0 sudo[137368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:14:19 compute-0 sudo[137368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:19 compute-0 sudo[137269]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:19 compute-0 podman[137433]: 2025-11-25 20:14:19.876765865 +0000 UTC m=+0.042706662 container create b3a352f15b8dd7700824616fc52ed17229a4ec754ee16c671ce8a0aef93a5fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:14:19 compute-0 systemd[1]: Started libpod-conmon-b3a352f15b8dd7700824616fc52ed17229a4ec754ee16c671ce8a0aef93a5fec.scope.
Nov 25 20:14:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:14:19 compute-0 podman[137433]: 2025-11-25 20:14:19.857045908 +0000 UTC m=+0.022986725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:14:19 compute-0 podman[137433]: 2025-11-25 20:14:19.958575409 +0000 UTC m=+0.124516226 container init b3a352f15b8dd7700824616fc52ed17229a4ec754ee16c671ce8a0aef93a5fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:14:19 compute-0 podman[137433]: 2025-11-25 20:14:19.966653535 +0000 UTC m=+0.132594332 container start b3a352f15b8dd7700824616fc52ed17229a4ec754ee16c671ce8a0aef93a5fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:14:19 compute-0 podman[137433]: 2025-11-25 20:14:19.969971803 +0000 UTC m=+0.135912600 container attach b3a352f15b8dd7700824616fc52ed17229a4ec754ee16c671ce8a0aef93a5fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:14:19 compute-0 zealous_hertz[137472]: 167 167
Nov 25 20:14:19 compute-0 systemd[1]: libpod-b3a352f15b8dd7700824616fc52ed17229a4ec754ee16c671ce8a0aef93a5fec.scope: Deactivated successfully.
Nov 25 20:14:19 compute-0 podman[137433]: 2025-11-25 20:14:19.971928866 +0000 UTC m=+0.137869703 container died b3a352f15b8dd7700824616fc52ed17229a4ec754ee16c671ce8a0aef93a5fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:14:19 compute-0 ceph-mon[75144]: pgmap v281: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-11e4218843b3548421da2a6451de1e29e97fe159f62c5e14e3da3627286d2a80-merged.mount: Deactivated successfully.
Nov 25 20:14:20 compute-0 podman[137433]: 2025-11-25 20:14:20.015491538 +0000 UTC m=+0.181432335 container remove b3a352f15b8dd7700824616fc52ed17229a4ec754ee16c671ce8a0aef93a5fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:14:20 compute-0 systemd[1]: libpod-conmon-b3a352f15b8dd7700824616fc52ed17229a4ec754ee16c671ce8a0aef93a5fec.scope: Deactivated successfully.
Nov 25 20:14:20 compute-0 podman[137549]: 2025-11-25 20:14:20.209695605 +0000 UTC m=+0.048592829 container create a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:14:20 compute-0 systemd[1]: Started libpod-conmon-a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018.scope.
Nov 25 20:14:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dd74fefed5bf4e4e5945ff9570f1c55bc6551f38871d7f42c21fc3f94999dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dd74fefed5bf4e4e5945ff9570f1c55bc6551f38871d7f42c21fc3f94999dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dd74fefed5bf4e4e5945ff9570f1c55bc6551f38871d7f42c21fc3f94999dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dd74fefed5bf4e4e5945ff9570f1c55bc6551f38871d7f42c21fc3f94999dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:14:20 compute-0 podman[137549]: 2025-11-25 20:14:20.18895402 +0000 UTC m=+0.027851274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:14:20 compute-0 podman[137549]: 2025-11-25 20:14:20.286039003 +0000 UTC m=+0.124936247 container init a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:14:20 compute-0 podman[137549]: 2025-11-25 20:14:20.291558941 +0000 UTC m=+0.130456165 container start a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:14:20 compute-0 podman[137549]: 2025-11-25 20:14:20.295060634 +0000 UTC m=+0.133957858 container attach a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:14:20 compute-0 sudo[137644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avdgpltvlhjeojfnghaujctzvcqfvqad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101660.0444236-79-147627284845544/AnsiballZ_systemd.py'
Nov 25 20:14:20 compute-0 sudo[137644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v282: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:20 compute-0 python3.9[137646]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:14:21 compute-0 sudo[137644]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]: {
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "osd_id": 2,
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "type": "bluestore"
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:     },
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "osd_id": 1,
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "type": "bluestore"
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:     },
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "osd_id": 0,
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:         "type": "bluestore"
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]:     }
Nov 25 20:14:21 compute-0 compassionate_northcutt[137566]: }
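[This second JSON block is the stdout of the `ceph-volume -- raw list --format json` call logged at 20:14:19: here the top-level keys are OSD uuids and each record names the device-mapper path, osd_id, and store type. A minimal sketch inverting that into an osd_id -> device map, under the same assumption that the structure matches the log; `osd_devices` is an illustrative name:]

    # Sketch: turn `ceph-volume raw list --format json` output
    # ({osd_uuid: {device, osd_id, type, ...}}) into {osd_id: device}.
    import json

    sample = '''
    {
        "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
            "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "type": "bluestore"
        }
    }
    '''

    def osd_devices(raw: str) -> dict:
        """Map osd_id to its backing device for bluestore OSDs."""
        return {rec["osd_id"]: rec["device"]
                for rec in json.loads(raw).values()
                if rec.get("type") == "bluestore"}

    print(osd_devices(sample))
    # -> {0: '/dev/mapper/ceph_vg0-ceph_lv0'}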
Nov 25 20:14:21 compute-0 systemd[1]: libpod-a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018.scope: Deactivated successfully.
Nov 25 20:14:21 compute-0 podman[137549]: 2025-11-25 20:14:21.295470168 +0000 UTC m=+1.134367412 container died a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:14:21 compute-0 systemd[1]: libpod-a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018.scope: Consumed 1.004s CPU time.
Nov 25 20:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-56dd74fefed5bf4e4e5945ff9570f1c55bc6551f38871d7f42c21fc3f94999dc-merged.mount: Deactivated successfully.
Nov 25 20:14:21 compute-0 podman[137549]: 2025-11-25 20:14:21.369095245 +0000 UTC m=+1.207992509 container remove a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:14:21 compute-0 systemd[1]: libpod-conmon-a94b4ad9b4de8065cfc702ed49b3b48950fb44b48b97c59e4341a4b25179d018.scope: Deactivated successfully.
Nov 25 20:14:21 compute-0 sudo[137368]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:14:21 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:14:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:14:21 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:14:21 compute-0 sudo[137767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:14:21 compute-0 sudo[137767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:21 compute-0 sudo[137767]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:21 compute-0 sudo[137792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:14:21 compute-0 sudo[137792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:14:21 compute-0 sudo[137792]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:21 compute-0 sudo[137890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zisdwddvbuearcjwbwnlicmhfktcgwbc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764101661.2618158-87-172223412177753/AnsiballZ_edpm_nftables_snippet.py'
Nov 25 20:14:21 compute-0 sudo[137890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:22 compute-0 ceph-mon[75144]: pgmap v282: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:14:22 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:14:22 compute-0 python3[137892]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 25 20:14:22 compute-0 sudo[137890]: pam_unix(sudo:session): session closed for user root
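
[annotation] The snippet above lands in /var/lib/edpm-config/firewall/ovn.yaml and is later compiled, together with the base rules, into /etc/nftables/edpm-rules.nft. Approximately, its four entries correspond to nft rules like these (a sketch only: the `inet filter` table and EDPM_INPUT chain names are illustrative assumptions, not the role's literal output):

    nft add rule inet filter EDPM_INPUT udp dport 4789 accept                     # 118 vxlan
    nft add rule inet filter EDPM_INPUT udp dport 6081 ct state untracked accept  # 119 geneve
    nft add rule ip raw OUTPUT     udp dport 6081 notrack                         # 120 no conntrack
    nft add rule ip raw PREROUTING udp dport 6081 notrack                         # 121 no conntrack
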
Nov 25 20:14:22 compute-0 sudo[138042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbrnjmrqkolwyrpkxwikhvuytvtgigtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101662.36639-96-218305669028450/AnsiballZ_file.py'
Nov 25 20:14:22 compute-0 sudo[138042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v283: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:22 compute-0 python3.9[138044]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:22 compute-0 sudo[138042]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:23 compute-0 sudo[138194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhwqlcimpssutgihmqgztwuulaaevwet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101663.1968825-104-119906020242166/AnsiballZ_stat.py'
Nov 25 20:14:23 compute-0 sudo[138194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:23 compute-0 python3.9[138196]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:23 compute-0 sudo[138194]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:24 compute-0 ceph-mon[75144]: pgmap v283: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:24 compute-0 sudo[138272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyitbkmrzpyajdpfpwqjvwwpuqhigryi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101663.1968825-104-119906020242166/AnsiballZ_file.py'
Nov 25 20:14:24 compute-0 sudo[138272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:24 compute-0 python3.9[138274]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:24 compute-0 sudo[138272]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v284: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:25 compute-0 sudo[138424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amkzeuloyfkwnqqypaubapcgbxfsddaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101664.643573-116-205184350984951/AnsiballZ_stat.py'
Nov 25 20:14:25 compute-0 sudo[138424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:25 compute-0 python3.9[138426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:25 compute-0 sudo[138424]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:25 compute-0 sudo[138502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtwbyjvdvseiuikfqutkgwpsjbetgnhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101664.643573-116-205184350984951/AnsiballZ_file.py'
Nov 25 20:14:25 compute-0 sudo[138502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:25 compute-0 python3.9[138504]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yd78__bb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:25 compute-0 sudo[138502]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:26 compute-0 ceph-mon[75144]: pgmap v284: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:26 compute-0 sudo[138654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohovqmlslvemsdwlrvxssjsynltiylqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101665.976873-128-162498004055810/AnsiballZ_stat.py'
Nov 25 20:14:26 compute-0 sudo[138654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:26 compute-0 python3.9[138656]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:26 compute-0 sudo[138654]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:14:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:14:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:14:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:14:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:14:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:14:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v285: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:26 compute-0 sudo[138732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkulrnwfhhkivqfzrtphgizokmpywrnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101665.976873-128-162498004055810/AnsiballZ_file.py'
Nov 25 20:14:26 compute-0 sudo[138732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:27 compute-0 python3.9[138734]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:27 compute-0 sudo[138732]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:27 compute-0 sudo[138884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fergratgdybivaabafyyqsvuwtupwsdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101667.4153814-141-48257725614085/AnsiballZ_command.py'
Nov 25 20:14:27 compute-0 sudo[138884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:28 compute-0 ceph-mon[75144]: pgmap v285: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:28 compute-0 python3.9[138886]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:14:28 compute-0 sudo[138884]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v286: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:28 compute-0 sudo[139037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcezbtcripziznjcwtxtsnptsoldazyy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764101668.4202206-149-125791964472664/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 20:14:28 compute-0 sudo[139037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:29 compute-0 python3[139039]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 20:14:29 compute-0 sudo[139037]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:30 compute-0 ceph-mon[75144]: pgmap v286: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:30 compute-0 sudo[139189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilsoxtiroxwswrviiqnoyfgacwaxvnis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101669.3512669-157-226582175987554/AnsiballZ_stat.py'
Nov 25 20:14:30 compute-0 sudo[139189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:30 compute-0 python3.9[139191]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:30 compute-0 sudo[139189]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:30 compute-0 sudo[139314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idwoygrlddjkyeyabsdmdyuuqvqjbpwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101669.3512669-157-226582175987554/AnsiballZ_copy.py'
Nov 25 20:14:30 compute-0 sudo[139314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v287: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:31 compute-0 python3.9[139316]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101669.3512669-157-226582175987554/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:31 compute-0 sudo[139314]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:31 compute-0 sudo[139466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdwwczwlgsjhfrhltnkvgkrqduaqclzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101671.323483-172-176881932205686/AnsiballZ_stat.py'
Nov 25 20:14:31 compute-0 sudo[139466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:31 compute-0 python3.9[139468]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:32 compute-0 sudo[139466]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:32 compute-0 ceph-mon[75144]: pgmap v287: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:32 compute-0 sudo[139591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woscncqmnknouuxnzmhpdlqixovshmjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101671.323483-172-176881932205686/AnsiballZ_copy.py'
Nov 25 20:14:32 compute-0 sudo[139591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:32 compute-0 python3.9[139593]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101671.323483-172-176881932205686/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:32 compute-0 sudo[139591]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v288: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:33 compute-0 sudo[139743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnmpbzipiklhvwmaklbsstwthiqzfrvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101672.961996-187-31567082342030/AnsiballZ_stat.py'
Nov 25 20:14:33 compute-0 sudo[139743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:33 compute-0 python3.9[139745]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:33 compute-0 sudo[139743]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:34 compute-0 ceph-mon[75144]: pgmap v288: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:34 compute-0 sudo[139868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnaqflehpvfhxauskkjaeyyqrugcrcpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101672.961996-187-31567082342030/AnsiballZ_copy.py'
Nov 25 20:14:34 compute-0 sudo[139868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:34 compute-0 python3.9[139870]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101672.961996-187-31567082342030/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:34 compute-0 sudo[139868]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:34 compute-0 sudo[140020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkcrhmdsiuywdydufbxodevewgsreass ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101674.5159366-202-221561887780374/AnsiballZ_stat.py'
Nov 25 20:14:34 compute-0 sudo[140020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v289: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:35 compute-0 python3.9[140022]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:35 compute-0 sudo[140020]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:35 compute-0 sudo[140145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jounwdnkogxkjxcorohfsotbwrgpnvtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101674.5159366-202-221561887780374/AnsiballZ_copy.py'
Nov 25 20:14:35 compute-0 sudo[140145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:35 compute-0 python3.9[140147]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101674.5159366-202-221561887780374/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:35 compute-0 sudo[140145]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:36 compute-0 ceph-mon[75144]: pgmap v289: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:36 compute-0 sudo[140297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eskvstiraekajkgnoxhecahydycxndmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101676.034764-217-226149620490645/AnsiballZ_stat.py'
Nov 25 20:14:36 compute-0 sudo[140297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:36 compute-0 python3.9[140299]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:36 compute-0 sudo[140297]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v290: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:37 compute-0 sudo[140422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-optwovjgmcbjuwrsuojxteogcgilqeaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101676.034764-217-226149620490645/AnsiballZ_copy.py'
Nov 25 20:14:37 compute-0 sudo[140422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:37 compute-0 python3.9[140424]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764101676.034764-217-226149620490645/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:37 compute-0 sudo[140422]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:38 compute-0 ceph-mon[75144]: pgmap v290: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:38 compute-0 sudo[140574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evewfpcxerzoojfpjvuvfejamnxfaxgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101677.752308-232-76642677222926/AnsiballZ_file.py'
Nov 25 20:14:38 compute-0 sudo[140574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:38 compute-0 python3.9[140576]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:38 compute-0 sudo[140574]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v291: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:39 compute-0 sudo[140726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztdnsiiddipcqlpfanpmguxdourrpdep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101678.596709-240-117350686943754/AnsiballZ_command.py'
Nov 25 20:14:39 compute-0 sudo[140726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:39 compute-0 python3.9[140728]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:14:39 compute-0 sudo[140726]: pam_unix(sudo:session): session closed for user root
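
[annotation] This check-only pass concatenates the five generated files in load order and feeds them to nft -c, which parses and validates the ruleset without touching kernel state; nothing is applied until it succeeds. Equivalent by hand:

    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -    # -c: dry-run check only
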
Nov 25 20:14:40 compute-0 ceph-mon[75144]: pgmap v291: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:40 compute-0 sudo[140881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpbxwxxuuxukrbwlamexbcafepdaasgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101679.5300052-248-94811909989900/AnsiballZ_blockinfile.py'
Nov 25 20:14:40 compute-0 sudo[140881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:40 compute-0 python3.9[140883]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:40 compute-0 sudo[140881]: pam_unix(sudo:session): session closed for user root
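
[annotation] With validate=nft -c -f %s, blockinfile commits the edit only if the whole file still parses. The managed block it maintains in /etc/sysconfig/nftables.conf, which the nftables service loads at boot, comes out as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
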
Nov 25 20:14:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v292: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:41 compute-0 sudo[141033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcfvjnpadzvzgxpadjijhpqgnsocppsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101680.6454797-257-125362276680962/AnsiballZ_command.py'
Nov 25 20:14:41 compute-0 sudo[141033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:41 compute-0 python3.9[141035]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:14:41 compute-0 sudo[141033]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:41 compute-0 sudo[141186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmlnlwcjtkrcgxpnguhlnletjdpsjkrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101681.5467935-265-164171988013264/AnsiballZ_stat.py'
Nov 25 20:14:41 compute-0 sudo[141186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:42 compute-0 ceph-mon[75144]: pgmap v292: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:42 compute-0 python3.9[141188]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:14:42 compute-0 sudo[141186]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:42 compute-0 sudo[141340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uemjiobrgonbxthmpbnlvtghlhxiysvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101682.3983655-273-233930402618486/AnsiballZ_command.py'
Nov 25 20:14:42 compute-0 sudo[141340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v293: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:43 compute-0 python3.9[141342]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:14:43 compute-0 sudo[141340]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:43 compute-0 sudo[141495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwzrphkceulblyyntybpqlsdqqgzbsuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101683.2942839-281-1804289201613/AnsiballZ_file.py'
Nov 25 20:14:43 compute-0 sudo[141495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:43 compute-0 python3.9[141497]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:43 compute-0 sudo[141495]: pam_unix(sudo:session): session closed for user root
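
[annotation] Taken together, the tasks from 20:14:38 onward implement a marker-gated reload: touch edpm-rules.nft.changed when the rendered rules differ, always (re)apply the chains file, flush and reload the rules only while the marker exists, then delete it. Roughly:

    nft -f /etc/nftables/edpm-chains.nft                     # idempotent: create/refresh chains
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -   # flush EDPM chains, load new rules
        rm -f /etc/nftables/edpm-rules.nft.changed           # clear the marker
    fi
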
Nov 25 20:14:44 compute-0 ceph-mon[75144]: pgmap v293: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v294: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:45 compute-0 python3.9[141647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:14:46 compute-0 ceph-mon[75144]: pgmap v294: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.215949) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101686216044, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 662, "num_deletes": 250, "total_data_size": 541831, "memory_usage": 554560, "flush_reason": "Manual Compaction"}
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101686222949, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 360990, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6573, "largest_seqno": 7234, "table_properties": {"data_size": 357972, "index_size": 926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7584, "raw_average_key_size": 19, "raw_value_size": 351755, "raw_average_value_size": 901, "num_data_blocks": 43, "num_entries": 390, "num_filter_entries": 390, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101633, "oldest_key_time": 1764101633, "file_creation_time": 1764101686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7043 microseconds, and 3097 cpu microseconds.
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.223003) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 360990 bytes OK
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.223027) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.224453) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.224472) EVENT_LOG_v1 {"time_micros": 1764101686224466, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.224496) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 538339, prev total WAL file size 538339, number of live WAL files 2.
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.225244) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(352KB)], [20(5158KB)]
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101686225303, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 5643130, "oldest_snapshot_seqno": -1}
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2456 keys, 3924969 bytes, temperature: kUnknown
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101686256626, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 3924969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 3907468, "index_size": 10288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6149, "raw_key_size": 56975, "raw_average_key_size": 23, "raw_value_size": 3862377, "raw_average_value_size": 1572, "num_data_blocks": 468, "num_entries": 2456, "num_filter_entries": 2456, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764101686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.257120) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 3924969 bytes
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.258760) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.9 rd, 124.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 5.0 +0.0 blob) out(3.7 +0.0 blob), read-write-amplify(26.5) write-amplify(10.9) OK, records in: 2946, records dropped: 490 output_compression: NoCompression
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.258836) EVENT_LOG_v1 {"time_micros": 1764101686258778, "job": 6, "event": "compaction_finished", "compaction_time_micros": 31549, "compaction_time_cpu_micros": 24872, "output_level": 6, "num_output_files": 1, "total_output_size": 3924969, "num_input_records": 2946, "num_output_records": 2456, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101686259349, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101686261334, "job": 6, "event": "table_file_deletion", "file_number": 20}
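
[annotation] The amplification figures in JOB 6's summary line follow directly from the byte counts in the event log (one 352 KB L0 input, one 5158 KB L6 input, one 3.7 MB L6 output, 31,549 µs of compaction time):

    write-amplify      = 3,924,969 / 360,990                ≈ 10.9
    read-write-amplify = (5,643,130 + 3,924,969) / 360,990  ≈ 26.5
    read rate          = 5,643,130 B / 31,549 µs            ≈ 178.9 MB/s
    write rate         = 3,924,969 B / 31,549 µs            ≈ 124.4 MB/s
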
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.225145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.261488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.261497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.261501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.261504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:14:46 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:14:46.261516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:14:46 compute-0 sudo[141798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladlwhcqdkkzfwxrnunmconvnnjbzjoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101685.9490192-321-168660851797968/AnsiballZ_command.py'
Nov 25 20:14:46 compute-0 sudo[141798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:46 compute-0 python3.9[141800]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:14:46 compute-0 ovs-vsctl[141801]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 20:14:46 compute-0 sudo[141798]: pam_unix(sudo:session): session closed for user root
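
[annotation] These external_ids are how ovn-controller learns the node's identity, tunnel endpoint, and southbound database connection. Once set, they can be read back with standard ovs-vsctl getters, for example:

    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote       # ssl:ovsdbserver-sb.openstack.svc:6642
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip     # 172.19.0.100
    ovs-vsctl get Open_vSwitch . external_ids:ovn-bridge-mappings
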
Nov 25 20:14:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v295: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:47 compute-0 ceph-mon[75144]: pgmap v295: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:47 compute-0 sudo[141951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygqcjjpnzpyhtyjklnfuhryppomhserk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101686.8831496-330-75696735799684/AnsiballZ_command.py'
Nov 25 20:14:47 compute-0 sudo[141951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:47 compute-0 python3.9[141953]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:14:47 compute-0 sudo[141951]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:48 compute-0 sudo[142106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnxvxxgmuxkjlyoiwkyiucskmprkoyio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101687.8237069-338-141641646788067/AnsiballZ_command.py'
Nov 25 20:14:48 compute-0 sudo[142106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:48 compute-0 python3.9[142108]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:14:48 compute-0 ovs-vsctl[142109]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 25 20:14:48 compute-0 sudo[142106]: pam_unix(sudo:session): session closed for user root
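
[annotation] The masked _raw_params in the 20:14:48 task is shown unredacted in the ovs-vsctl INFO line that follows it: because the earlier `ovs-vsctl show | grep -q "Manager"` check found no manager entry, the play registers a passive OVSDB manager socket on localhost. A quick verification sketch:

    ovs-vsctl get-manager    # expect: ptcp:6640:127.0.0.1
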
Nov 25 20:14:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v296: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:49 compute-0 python3.9[142259]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:14:49 compute-0 sudo[142411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myyoeqochqrdkcrrrzampnovjytckfsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101689.5825193-355-49082068876859/AnsiballZ_file.py'
Nov 25 20:14:49 compute-0 ceph-mon[75144]: pgmap v296: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:49 compute-0 sudo[142411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:50 compute-0 python3.9[142413]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:14:50 compute-0 sudo[142411]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:50 compute-0 sudo[142563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjoudkyrponqexkpkgadhxkifhhvdvrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101690.4254162-363-137083049016942/AnsiballZ_stat.py'
Nov 25 20:14:50 compute-0 sudo[142563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v297: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:51 compute-0 python3.9[142565]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:51 compute-0 sudo[142563]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:51 compute-0 sudo[142641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnuwxbyzayasdqyhbrllqhhkckgoimla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101690.4254162-363-137083049016942/AnsiballZ_file.py'
Nov 25 20:14:51 compute-0 sudo[142641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:51 compute-0 python3.9[142643]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:14:51 compute-0 sudo[142641]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:51 compute-0 ceph-mon[75144]: pgmap v297: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:52 compute-0 sudo[142793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zepznrlrlgjywapctyeofbvbxtcyblvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101691.8009396-363-145868902839326/AnsiballZ_stat.py'
Nov 25 20:14:52 compute-0 sudo[142793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:52 compute-0 python3.9[142795]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:52 compute-0 sudo[142793]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:52 compute-0 sudo[142871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrfvebqgtejnrzmmcpctmrdnzfwhysvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101691.8009396-363-145868902839326/AnsiballZ_file.py'
Nov 25 20:14:52 compute-0 sudo[142871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v298: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:52 compute-0 python3.9[142873]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:14:53 compute-0 sudo[142871]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:53 compute-0 sudo[143023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phwbnvzopddemxigdjmhzmeqfrxqzdps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101693.227563-386-20505966855500/AnsiballZ_file.py'
Nov 25 20:14:53 compute-0 sudo[143023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:53 compute-0 python3.9[143025]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:53 compute-0 sudo[143023]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:54 compute-0 ceph-mon[75144]: pgmap v298: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:54 compute-0 sudo[143175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogsxfzrcdhxvfirwjhaviuksudubgvxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101694.0992928-394-43337194955148/AnsiballZ_stat.py'
Nov 25 20:14:54 compute-0 sudo[143175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:54 compute-0 python3.9[143177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:54 compute-0 sudo[143175]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v299: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:55 compute-0 sudo[143253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pakxfgbqoymigfoesgxtvgijhhhlwmnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101694.0992928-394-43337194955148/AnsiballZ_file.py'
Nov 25 20:14:55 compute-0 sudo[143253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:55 compute-0 python3.9[143255]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:55 compute-0 sudo[143253]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:55 compute-0 sudo[143405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqlseyznbmkojdfsubbwnqlednscwkrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101695.5552123-406-145477746975919/AnsiballZ_stat.py'
Nov 25 20:14:55 compute-0 sudo[143405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:56 compute-0 ceph-mon[75144]: pgmap v299: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:56 compute-0 python3.9[143407]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:14:56 compute-0 sudo[143405]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:56 compute-0 sudo[143483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwnadcqamuavxmnwfxwrizpzerrqzze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101695.5552123-406-145477746975919/AnsiballZ_file.py'
Nov 25 20:14:56 compute-0 sudo[143483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:56 compute-0 python3.9[143485]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:56 compute-0 sudo[143483]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v300: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:14:56
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'backups', 'cephfs.cephfs.data', 'vms']
Nov 25 20:14:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:14:57 compute-0 sudo[143635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tovnyteieweemyaltgoorttqwrwpaelq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101697.0189633-418-140249481492864/AnsiballZ_systemd.py'
Nov 25 20:14:57 compute-0 sudo[143635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:57 compute-0 python3.9[143637]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:14:57 compute-0 systemd[1]: Reloading.
Nov 25 20:14:57 compute-0 systemd-sysv-generator[143668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:14:57 compute-0 systemd-rc-local-generator[143665]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:14:58 compute-0 ceph-mon[75144]: pgmap v300: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:58 compute-0 sudo[143635]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:58 compute-0 sudo[143824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gghyaacxhmhmcsxmwqmtzukykkchzocs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101698.4245698-426-96647148375431/AnsiballZ_stat.py'
Nov 25 20:14:58 compute-0 sudo[143824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v301: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:14:59 compute-0 python3.9[143826]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:14:59 compute-0 sudo[143824]: pam_unix(sudo:session): session closed for user root
Nov 25 20:14:59 compute-0 sudo[143902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yllnkatbsqcilkritgnvjhvxihszzrro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101698.4245698-426-96647148375431/AnsiballZ_file.py'
Nov 25 20:14:59 compute-0 sudo[143902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:14:59 compute-0 python3.9[143904]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:14:59 compute-0 sudo[143902]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:00 compute-0 ceph-mon[75144]: pgmap v301: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:00 compute-0 sudo[144054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gytpjllxrjcsyqdsyapydcflsamsyeaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101699.9205804-438-232650992205174/AnsiballZ_stat.py'
Nov 25 20:15:00 compute-0 sudo[144054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:00 compute-0 python3.9[144056]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:15:00 compute-0 sudo[144054]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:00 compute-0 sudo[144132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcxmraofxaqliqgsgnosfflbcpdmkcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101699.9205804-438-232650992205174/AnsiballZ_file.py'
Nov 25 20:15:00 compute-0 sudo[144132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v302: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:01 compute-0 python3.9[144134]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:15:01 compute-0 sudo[144132]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:01 compute-0 sudo[144284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biytxwuhnhktsfjrvgqanvjrgqrjjdvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101701.3580573-450-109450495498001/AnsiballZ_systemd.py'
Nov 25 20:15:01 compute-0 sudo[144284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:02 compute-0 ceph-mon[75144]: pgmap v302: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:02 compute-0 python3.9[144286]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:15:02 compute-0 systemd[1]: Reloading.
Nov 25 20:15:02 compute-0 systemd-rc-local-generator[144313]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:15:02 compute-0 systemd-sysv-generator[144316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:15:02 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 20:15:02 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 20:15:02 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 20:15:02 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 20:15:02 compute-0 sudo[144284]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v303: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:03 compute-0 sudo[144476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrqzyjoiqwmfzsqhrxkvftkvmprlkiip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101702.8485363-460-273655335661795/AnsiballZ_file.py'
Nov 25 20:15:03 compute-0 sudo[144476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:03 compute-0 python3.9[144478]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:03 compute-0 sudo[144476]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:04 compute-0 sudo[144628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoejiwzpjsyjqjclxzrrrpjlnonpebpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101703.6531549-468-57682752210974/AnsiballZ_stat.py'
Nov 25 20:15:04 compute-0 sudo[144628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:04 compute-0 ceph-mon[75144]: pgmap v303: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:04 compute-0 python3.9[144630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:15:04 compute-0 sudo[144628]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:04 compute-0 sudo[144751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxtbveitzzgrveuafzxeqsyezjhdpqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101703.6531549-468-57682752210974/AnsiballZ_copy.py'
Nov 25 20:15:04 compute-0 sudo[144751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:04 compute-0 python3.9[144753]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764101703.6531549-468-57682752210974/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:04 compute-0 sudo[144751]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v304: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:05 compute-0 sudo[144903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdtrkvuwpiltvmnrvznvqbvqovfriecp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101705.256634-485-91651377562585/AnsiballZ_file.py'
Nov 25 20:15:05 compute-0 sudo[144903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:05 compute-0 python3.9[144905]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:05 compute-0 sudo[144903]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:06 compute-0 ceph-mon[75144]: pgmap v304: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:06 compute-0 sudo[145055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ectzwqkfotengcomsaynohyfryslbqhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101706.141619-493-279186258252959/AnsiballZ_stat.py'
Nov 25 20:15:06 compute-0 sudo[145055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:06 compute-0 python3.9[145057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:15:06 compute-0 sudo[145055]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v305: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:07 compute-0 sudo[145178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogavrhuqrsmcwfjpjfbwiqadzdiljjzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101706.141619-493-279186258252959/AnsiballZ_copy.py'
Nov 25 20:15:07 compute-0 sudo[145178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:07 compute-0 python3.9[145180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764101706.141619-493-279186258252959/.source.json _original_basename=.rl4btsnw follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:15:07 compute-0 sudo[145178]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:08 compute-0 ceph-mon[75144]: pgmap v305: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:08 compute-0 sudo[145330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akidvtaopuzcqsynlbsjjvxqduiftotf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101707.8383834-508-232943831932829/AnsiballZ_file.py'
Nov 25 20:15:08 compute-0 sudo[145330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:08 compute-0 python3.9[145332]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:15:08 compute-0 sudo[145330]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v306: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:09 compute-0 sudo[145482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idgrfvkigpiznhydvvxpdoaccrfoqqnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101708.6946063-516-114597424404590/AnsiballZ_stat.py'
Nov 25 20:15:09 compute-0 sudo[145482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:09 compute-0 sudo[145482]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:09 compute-0 sudo[145605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysxvniiejbwcgbnqbimdxfsjgmrmbyet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101708.6946063-516-114597424404590/AnsiballZ_copy.py'
Nov 25 20:15:09 compute-0 sudo[145605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:09 compute-0 sudo[145605]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:10 compute-0 ceph-mon[75144]: pgmap v306: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:10 compute-0 sudo[145757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyslsnmizayvracacfwbgvypcsksxseb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101710.312262-533-257698593602270/AnsiballZ_container_config_data.py'
Nov 25 20:15:10 compute-0 sudo[145757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v307: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:11 compute-0 python3.9[145759]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 20:15:11 compute-0 sudo[145757]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:15:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1696 writes, 7274 keys, 1696 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 1696 writes, 1696 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1696 writes, 7274 keys, 1696 commit groups, 1.0 writes per commit group, ingest: 7.39 MB, 0.01 MB/s
                                           Interval WAL: 1696 writes, 1696 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    111.1      0.05              0.03         3    0.016       0      0       0.0       0.0
                                             L6      1/0    3.74 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    150.7    126.5      0.07              0.05         2    0.035    5790    778       0.0       0.0
                                            Sum      1/0    3.74 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     88.5    120.2      0.12              0.07         5    0.024    5790    778       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     90.9    122.9      0.12              0.07         4    0.029    5790    778       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    150.7    126.5      0.07              0.05         2    0.035    5790    778       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    117.5      0.05              0.03         2    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.005, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.01 GB write, 0.02 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.1 seconds
                                           Interval compaction: 0.01 GB write, 0.02 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585aba031f0#2 capacity: 308.00 MB usage: 537.16 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000127 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(34,468.81 KB,0.148644%) FilterBlock(6,22.67 KB,0.00718847%) IndexBlock(6,45.67 KB,0.014481%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 20:15:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:11 compute-0 sudo[145909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhtwwdfyzjigvqvhbezapkfgvhpejjgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101711.2921515-542-163227517522960/AnsiballZ_container_config_hash.py'
Nov 25 20:15:11 compute-0 sudo[145909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:12 compute-0 python3.9[145911]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:15:12 compute-0 sudo[145909]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:12 compute-0 ceph-mon[75144]: pgmap v307: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:12 compute-0 sudo[146061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpkodindskdrxsokmflknbugiredshpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101712.385373-551-39328928277233/AnsiballZ_podman_container_info.py'
Nov 25 20:15:12 compute-0 sudo[146061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v308: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:13 compute-0 python3.9[146063]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 20:15:13 compute-0 sudo[146061]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:14 compute-0 ceph-mon[75144]: pgmap v308: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:14 compute-0 sudo[146238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhtkjjwzkafxquhuwmsthtjphylduqfw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764101714.0810263-564-183896039971793/AnsiballZ_edpm_container_manage.py'
Nov 25 20:15:14 compute-0 sudo[146238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v309: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:14 compute-0 python3[146240]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:15:16 compute-0 ceph-mon[75144]: pgmap v309: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v310: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:18 compute-0 ceph-mon[75144]: pgmap v310: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v311: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:19 compute-0 ceph-mon[75144]: pgmap v311: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:20 compute-0 podman[146253]: 2025-11-25 20:15:20.192197718 +0000 UTC m=+5.140053610 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 20:15:20 compute-0 podman[146372]: 2025-11-25 20:15:20.427368807 +0000 UTC m=+0.073488766 container create eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:15:20 compute-0 podman[146372]: 2025-11-25 20:15:20.390527177 +0000 UTC m=+0.036647176 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 20:15:20 compute-0 python3[146240]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 20:15:20 compute-0 sudo[146238]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v312: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:21 compute-0 sudo[146560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wofpwrevxkzfrfmxuadwnliinvmczptq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101720.8772585-572-230467884589166/AnsiballZ_stat.py'
Nov 25 20:15:21 compute-0 sudo[146560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:21 compute-0 python3.9[146562]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:15:21 compute-0 sudo[146560]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:21 compute-0 sudo[146589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:15:21 compute-0 sudo[146589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:21 compute-0 sudo[146589]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:21 compute-0 sudo[146614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:15:21 compute-0 sudo[146614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:21 compute-0 sudo[146614]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:21 compute-0 sudo[146645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:15:21 compute-0 sudo[146645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:21 compute-0 sudo[146645]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:21 compute-0 sudo[146707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:15:21 compute-0 sudo[146707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:22 compute-0 ceph-mon[75144]: pgmap v312: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:22 compute-0 sudo[146829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koetvskuqkxhgrgjrkvmmqwhehdyuijk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101721.84315-581-170468358831795/AnsiballZ_file.py'
Nov 25 20:15:22 compute-0 sudo[146829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:22 compute-0 python3.9[146831]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:15:22 compute-0 sudo[146829]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:22 compute-0 sudo[146707]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:15:22 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:15:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:15:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:15:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:15:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:15:22 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 66c8f654-d81f-4162-8505-a28d70c959cf does not exist
Nov 25 20:15:22 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 05952404-13cb-49bf-a392-7af3088c0b72 does not exist
Nov 25 20:15:22 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 70d09233-2bc2-453b-b93c-8a23d8462283 does not exist
Nov 25 20:15:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:15:22 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:15:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:15:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:15:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:15:22 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:15:22 compute-0 sudo[146872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:15:22 compute-0 sudo[146872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:22 compute-0 sudo[146872]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:22 compute-0 sudo[146921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:15:22 compute-0 sudo[146921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:22 compute-0 sudo[146921]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:22 compute-0 sudo[146972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nljxeaiobptaojaddbxluinfiktxwcoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101721.84315-581-170468358831795/AnsiballZ_stat.py'
Nov 25 20:15:22 compute-0 sudo[146972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:22 compute-0 sudo[146973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:15:22 compute-0 sudo[146973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:22 compute-0 sudo[146973]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:22 compute-0 sudo[147000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:15:22 compute-0 sudo[147000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v313: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:23 compute-0 python3.9[146979]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:15:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:15:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:15:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:15:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:15:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:15:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:15:23 compute-0 sudo[146972]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:23 compute-0 podman[147115]: 2025-11-25 20:15:23.377701993 +0000 UTC m=+0.068491488 container create 5fce9873c401aec0b9473090d07ab3437a13defe30db97948e61724ed609f756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:15:23 compute-0 systemd[1]: Started libpod-conmon-5fce9873c401aec0b9473090d07ab3437a13defe30db97948e61724ed609f756.scope.
Nov 25 20:15:23 compute-0 podman[147115]: 2025-11-25 20:15:23.349258767 +0000 UTC m=+0.040048322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:15:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:15:23 compute-0 podman[147115]: 2025-11-25 20:15:23.490261244 +0000 UTC m=+0.181050739 container init 5fce9873c401aec0b9473090d07ab3437a13defe30db97948e61724ed609f756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 20:15:23 compute-0 podman[147115]: 2025-11-25 20:15:23.503368878 +0000 UTC m=+0.194158343 container start 5fce9873c401aec0b9473090d07ab3437a13defe30db97948e61724ed609f756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:15:23 compute-0 podman[147115]: 2025-11-25 20:15:23.507575446 +0000 UTC m=+0.198364951 container attach 5fce9873c401aec0b9473090d07ab3437a13defe30db97948e61724ed609f756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:15:23 compute-0 wonderful_cannon[147155]: 167 167
Nov 25 20:15:23 compute-0 systemd[1]: libpod-5fce9873c401aec0b9473090d07ab3437a13defe30db97948e61724ed609f756.scope: Deactivated successfully.
Nov 25 20:15:23 compute-0 podman[147115]: 2025-11-25 20:15:23.51558427 +0000 UTC m=+0.206373775 container died 5fce9873c401aec0b9473090d07ab3437a13defe30db97948e61724ed609f756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1697f3f47656f483dc0147c42245fd908662c4b1016f48edeae68b86636ca50a-merged.mount: Deactivated successfully.
Nov 25 20:15:23 compute-0 podman[147115]: 2025-11-25 20:15:23.565386771 +0000 UTC m=+0.256176276 container remove 5fce9873c401aec0b9473090d07ab3437a13defe30db97948e61724ed609f756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:15:23 compute-0 systemd[1]: libpod-conmon-5fce9873c401aec0b9473090d07ab3437a13defe30db97948e61724ed609f756.scope: Deactivated successfully.
Nov 25 20:15:23 compute-0 sudo[147254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lilmxtlwmunukkgdcbdstcxltanzlhdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101723.0991518-581-162645017324325/AnsiballZ_copy.py'
Nov 25 20:15:23 compute-0 sudo[147254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:23 compute-0 podman[147252]: 2025-11-25 20:15:23.764349005 +0000 UTC m=+0.053992748 container create b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 25 20:15:23 compute-0 systemd[1]: Started libpod-conmon-b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888.scope.
Nov 25 20:15:23 compute-0 podman[147252]: 2025-11-25 20:15:23.73516008 +0000 UTC m=+0.024803823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:15:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb814e6eb4097ad491bcec809e72de8520a80b83f9ac78398ff3406e7264d54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb814e6eb4097ad491bcec809e72de8520a80b83f9ac78398ff3406e7264d54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb814e6eb4097ad491bcec809e72de8520a80b83f9ac78398ff3406e7264d54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb814e6eb4097ad491bcec809e72de8520a80b83f9ac78398ff3406e7264d54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb814e6eb4097ad491bcec809e72de8520a80b83f9ac78398ff3406e7264d54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:23 compute-0 podman[147252]: 2025-11-25 20:15:23.917168153 +0000 UTC m=+0.206811996 container init b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 20:15:23 compute-0 podman[147252]: 2025-11-25 20:15:23.930262217 +0000 UTC m=+0.219906000 container start b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:15:23 compute-0 podman[147252]: 2025-11-25 20:15:23.934977238 +0000 UTC m=+0.224621041 container attach b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_northcutt, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:15:23 compute-0 python3.9[147266]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764101723.0991518-581-162645017324325/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:15:23 compute-0 sudo[147254]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:24 compute-0 ceph-mon[75144]: pgmap v313: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:24 compute-0 sudo[147349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkakjnbjpqxuwingpueplhkkqzqsgdfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101723.0991518-581-162645017324325/AnsiballZ_systemd.py'
Nov 25 20:15:24 compute-0 sudo[147349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:24 compute-0 python3.9[147351]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:15:24 compute-0 systemd[1]: Reloading.
Nov 25 20:15:24 compute-0 systemd-rc-local-generator[147388]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:15:24 compute-0 systemd-sysv-generator[147391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:15:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v314: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:25 compute-0 sudo[147349]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:25 compute-0 unruffled_northcutt[147271]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:15:25 compute-0 unruffled_northcutt[147271]: --> relative data size: 1.0
Nov 25 20:15:25 compute-0 unruffled_northcutt[147271]: --> All data devices are unavailable
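[Annotation] The unruffled_northcutt container above is a cephadm-driven ceph-volume pass: it sees 0 physical and 3 LVM data devices and declares them all unavailable, which is the expected outcome when the LVs already carry OSDs (the lvm list JSON further down confirms this). A hedged sketch, reusing the cephadm wrapper path and fsid that appear later in this log, of how one could inspect device availability with ceph-volume's real inventory subcommand; the exact wrapper arguments here are an assumption modeled on the logged invocations:

    # Illustrative only: ask ceph-volume (via the cephadm wrapper seen
    # in this log) which devices it considers available and why others
    # are rejected. Requires root on the node; adjust paths elsewhere.
    import json
    import subprocess

    WRAPPER = ("/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    out = subprocess.run(
        ["sudo", "/bin/python3", WRAPPER,
         "ceph-volume", "--fsid", "712dd110-763a-5547-8ef7-acda1414fdce",
         "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        status = "available" if dev["available"] else dev["rejected_reasons"]
        print(dev["path"], status)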
Nov 25 20:15:25 compute-0 systemd[1]: libpod-b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888.scope: Deactivated successfully.
Nov 25 20:15:25 compute-0 systemd[1]: libpod-b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888.scope: Consumed 1.174s CPU time.
Nov 25 20:15:25 compute-0 podman[147252]: 2025-11-25 20:15:25.170087792 +0000 UTC m=+1.459731545 container died b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb814e6eb4097ad491bcec809e72de8520a80b83f9ac78398ff3406e7264d54-merged.mount: Deactivated successfully.
Nov 25 20:15:25 compute-0 podman[147252]: 2025-11-25 20:15:25.253763286 +0000 UTC m=+1.543407059 container remove b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:15:25 compute-0 systemd[1]: libpod-conmon-b8d7eb641cde92e3e369dfc7b36320fe81237af5fea5e133502933b8b77ad888.scope: Deactivated successfully.
Nov 25 20:15:25 compute-0 sudo[147000]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:25 compute-0 sudo[147471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:15:25 compute-0 sudo[147471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:25 compute-0 sudo[147471]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:25 compute-0 sudo[147527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcydeldgxaerkbxfesxqbtqnlxizgnmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101723.0991518-581-162645017324325/AnsiballZ_systemd.py'
Nov 25 20:15:25 compute-0 sudo[147527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:25 compute-0 sudo[147521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:15:25 compute-0 sudo[147521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:25 compute-0 sudo[147521]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:25 compute-0 sudo[147551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:15:25 compute-0 sudo[147551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:25 compute-0 sudo[147551]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:25 compute-0 sudo[147576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:15:25 compute-0 sudo[147576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:25 compute-0 python3.9[147548]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:15:25 compute-0 systemd[1]: Reloading.
Nov 25 20:15:25 compute-0 systemd-sysv-generator[147661]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:15:25 compute-0 systemd-rc-local-generator[147656]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:15:26 compute-0 ceph-mon[75144]: pgmap v314: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:26 compute-0 podman[147679]: 2025-11-25 20:15:26.053815064 +0000 UTC m=+0.070577531 container create f65064ea6ead71d18ecc588cc7999ae426df07496c73559f466666f89f80349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_boyd, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:15:26 compute-0 systemd[1]: Started libpod-conmon-f65064ea6ead71d18ecc588cc7999ae426df07496c73559f466666f89f80349f.scope.
Nov 25 20:15:26 compute-0 podman[147679]: 2025-11-25 20:15:26.020762601 +0000 UTC m=+0.037525098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:15:26 compute-0 systemd[1]: Starting ovn_controller container...
Nov 25 20:15:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:15:26 compute-0 podman[147679]: 2025-11-25 20:15:26.174282177 +0000 UTC m=+0.191044644 container init f65064ea6ead71d18ecc588cc7999ae426df07496c73559f466666f89f80349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:15:26 compute-0 podman[147679]: 2025-11-25 20:15:26.183897941 +0000 UTC m=+0.200660438 container start f65064ea6ead71d18ecc588cc7999ae426df07496c73559f466666f89f80349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_boyd, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:15:26 compute-0 podman[147679]: 2025-11-25 20:15:26.191386223 +0000 UTC m=+0.208148690 container attach f65064ea6ead71d18ecc588cc7999ae426df07496c73559f466666f89f80349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:15:26 compute-0 dazzling_boyd[147697]: 167 167
Nov 25 20:15:26 compute-0 systemd[1]: libpod-f65064ea6ead71d18ecc588cc7999ae426df07496c73559f466666f89f80349f.scope: Deactivated successfully.
Nov 25 20:15:26 compute-0 podman[147679]: 2025-11-25 20:15:26.196579625 +0000 UTC m=+0.213342132 container died f65064ea6ead71d18ecc588cc7999ae426df07496c73559f466666f89f80349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:15:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc13136f725f525e9cb57e91b2c124abef7445bf489c7dec8f49b3ba1c890ee6-merged.mount: Deactivated successfully.
Nov 25 20:15:26 compute-0 podman[147679]: 2025-11-25 20:15:26.240165067 +0000 UTC m=+0.256927534 container remove f65064ea6ead71d18ecc588cc7999ae426df07496c73559f466666f89f80349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_boyd, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:15:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3c0fa54dceba29207875cc9d00565912fbb684e104222cff99b6d3a580cfa42/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:26 compute-0 systemd[1]: libpod-conmon-f65064ea6ead71d18ecc588cc7999ae426df07496c73559f466666f89f80349f.scope: Deactivated successfully.
Nov 25 20:15:26 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b.
Nov 25 20:15:26 compute-0 podman[147701]: 2025-11-25 20:15:26.317143761 +0000 UTC m=+0.170300195 container init eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 20:15:26 compute-0 ovn_controller[147726]: + sudo -E kolla_set_configs
Nov 25 20:15:26 compute-0 podman[147701]: 2025-11-25 20:15:26.350508862 +0000 UTC m=+0.203665246 container start eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 20:15:26 compute-0 edpm-start-podman-container[147701]: ovn_controller
Nov 25 20:15:26 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 25 20:15:26 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 20:15:26 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 20:15:26 compute-0 edpm-start-podman-container[147699]: Creating additional drop-in dependency for "ovn_controller" (eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b)
Nov 25 20:15:26 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 25 20:15:26 compute-0 systemd[147781]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 25 20:15:26 compute-0 podman[147746]: 2025-11-25 20:15:26.448472131 +0000 UTC m=+0.068221832 container create 0606e0c4ce34dd4e4032a6e63571d25b0a2f9941f2c735fd3bfcc9e46ce15abd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_carver, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:15:26 compute-0 systemd[1]: Reloading.
Nov 25 20:15:26 compute-0 podman[147739]: 2025-11-25 20:15:26.481875913 +0000 UTC m=+0.109739201 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
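[Annotation] The container init/start/health_status records above embed the edpm_ansible config_data that defines the ovn_controller container (host networking, privileged, restart policy, bind mounts, and the /openstack/healthcheck test that the podman healthcheck unit runs). As a rough illustration of how such a dict maps onto real podman flags, and not the actual edpm_ansible implementation, a sketch with the config trimmed to a few of the logged fields:

    # Illustrative translation of the logged config_data dict into a
    # podman command line. Field names match the log; the builder
    # itself is hypothetical.
    config_data = {
        "net": "host",
        "privileged": True,
        "restart": "always",
        "user": "root",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/lib/modules:/lib/modules:ro", "/run:/run"],
        "healthcheck": {"test": "/openstack/healthcheck"},
    }

    cmd = ["podman", "run", "--name", "ovn_controller",
           "--net", config_data["net"],
           "--user", config_data["user"],
           "--restart", config_data["restart"]]
    if config_data.get("privileged"):
        cmd.append("--privileged")
    for key, val in config_data["environment"].items():
        cmd += ["--env", f"{key}={val}"]
    for vol in config_data["volumes"]:
        cmd += ["--volume", vol]
    cmd += ["--health-cmd", config_data["healthcheck"]["test"]]
    print(" ".join(cmd))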
Nov 25 20:15:26 compute-0 podman[147746]: 2025-11-25 20:15:26.416993638 +0000 UTC m=+0.036743369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:15:26 compute-0 systemd-rc-local-generator[147838]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:15:26 compute-0 systemd-sysv-generator[147841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:15:26 compute-0 systemd[147781]: Queued start job for default target Main User Target.
Nov 25 20:15:26 compute-0 systemd[147781]: Created slice User Application Slice.
Nov 25 20:15:26 compute-0 systemd[147781]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 20:15:26 compute-0 systemd[147781]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 20:15:26 compute-0 systemd[147781]: Reached target Paths.
Nov 25 20:15:26 compute-0 systemd[147781]: Reached target Timers.
Nov 25 20:15:26 compute-0 systemd[147781]: Starting D-Bus User Message Bus Socket...
Nov 25 20:15:26 compute-0 systemd[147781]: Starting Create User's Volatile Files and Directories...
Nov 25 20:15:26 compute-0 systemd[147781]: Listening on D-Bus User Message Bus Socket.
Nov 25 20:15:26 compute-0 systemd[147781]: Reached target Sockets.
Nov 25 20:15:26 compute-0 systemd[147781]: Finished Create User's Volatile Files and Directories.
Nov 25 20:15:26 compute-0 systemd[147781]: Reached target Basic System.
Nov 25 20:15:26 compute-0 systemd[147781]: Reached target Main User Target.
Nov 25 20:15:26 compute-0 systemd[147781]: Startup finished in 154ms.
Nov 25 20:15:26 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 25 20:15:26 compute-0 systemd[1]: Started ovn_controller container.
Nov 25 20:15:26 compute-0 systemd[1]: eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b-2d2131f9511ee093.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:15:26 compute-0 systemd[1]: eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b-2d2131f9511ee093.service: Failed with result 'exit-code'.
Nov 25 20:15:26 compute-0 systemd[1]: Started libpod-conmon-0606e0c4ce34dd4e4032a6e63571d25b0a2f9941f2c735fd3bfcc9e46ce15abd.scope.
Nov 25 20:15:26 compute-0 systemd[1]: Started Session c1 of User root.
Nov 25 20:15:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4587822db1a43439a073db72421f3533af9de3840a5f06f9ca5ce624be0b97fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4587822db1a43439a073db72421f3533af9de3840a5f06f9ca5ce624be0b97fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4587822db1a43439a073db72421f3533af9de3840a5f06f9ca5ce624be0b97fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4587822db1a43439a073db72421f3533af9de3840a5f06f9ca5ce624be0b97fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:26 compute-0 sudo[147527]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:26 compute-0 podman[147746]: 2025-11-25 20:15:26.791328606 +0000 UTC m=+0.411078307 container init 0606e0c4ce34dd4e4032a6e63571d25b0a2f9941f2c735fd3bfcc9e46ce15abd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_carver, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:15:26 compute-0 podman[147746]: 2025-11-25 20:15:26.805267251 +0000 UTC m=+0.425016952 container start 0606e0c4ce34dd4e4032a6e63571d25b0a2f9941f2c735fd3bfcc9e46ce15abd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:15:26 compute-0 ovn_controller[147726]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:15:26 compute-0 ovn_controller[147726]: INFO:__main__:Validating config file
Nov 25 20:15:26 compute-0 ovn_controller[147726]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:15:26 compute-0 ovn_controller[147726]: INFO:__main__:Writing out command to execute
Nov 25 20:15:26 compute-0 podman[147746]: 2025-11-25 20:15:26.809787197 +0000 UTC m=+0.429536918 container attach 0606e0c4ce34dd4e4032a6e63571d25b0a2f9941f2c735fd3bfcc9e46ce15abd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:15:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:15:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:15:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:15:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:15:26 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 20:15:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:15:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:15:26 compute-0 ovn_controller[147726]: ++ cat /run_command
Nov 25 20:15:26 compute-0 ovn_controller[147726]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 20:15:26 compute-0 ovn_controller[147726]: + ARGS=
Nov 25 20:15:26 compute-0 ovn_controller[147726]: + sudo kolla_copy_cacerts
Nov 25 20:15:26 compute-0 systemd[1]: Started Session c2 of User root.
Nov 25 20:15:26 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 25 20:15:26 compute-0 ovn_controller[147726]: + [[ ! -n '' ]]
Nov 25 20:15:26 compute-0 ovn_controller[147726]: + . kolla_extend_start
Nov 25 20:15:26 compute-0 ovn_controller[147726]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 20:15:26 compute-0 ovn_controller[147726]: + umask 0022
Nov 25 20:15:26 compute-0 ovn_controller[147726]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 25 20:15:26 compute-0 ovn_controller[147726]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 20:15:26 compute-0 NetworkManager[49051]: <info>  [1764101726.8931] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 20:15:26 compute-0 NetworkManager[49051]: <info>  [1764101726.8944] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:15:26 compute-0 NetworkManager[49051]: <info>  [1764101726.8967] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 20:15:26 compute-0 NetworkManager[49051]: <info>  [1764101726.8981] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 20:15:26 compute-0 NetworkManager[49051]: <info>  [1764101726.8990] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:15:26 compute-0 kernel: br-int: entered promiscuous mode
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00010|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00011|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00013|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00014|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00015|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
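[Annotation] The two "connection failed (No such file or directory)" warnings above are benign startup noise: ovn-controller has just asked ovs-vswitchd to create br-int, and it retries until the br-int.mgmt management socket exists (the later 00025-00027 records show it connecting successfully). A generic sketch of the same wait-then-reconnect pattern, not ovn's actual rconn code:

    # Illustrative retry loop: wait for a unix socket to appear,
    # pausing 1 second between attempts like the rconn messages above.
    import os
    import socket
    import time

    def wait_for_socket(path, attempts=30):
        for _ in range(attempts):
            if os.path.exists(path):
                sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
                sock.connect(path)
                return sock
            time.sleep(1)  # "waiting 1 seconds before reconnect"
        raise TimeoutError(f"socket never appeared: {path}")

    # Example: wait_for_socket("/var/run/openvswitch/br-int.mgmt")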
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00018|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00019|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00020|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00021|main|INFO|OVS feature set changed, force recompute.
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 20:15:26 compute-0 ovn_controller[147726]: 2025-11-25T20:15:26Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 20:15:26 compute-0 systemd-udevd[147888]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 20:15:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v315: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:27 compute-0 sudo[148016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebyvrhzjpcepkdvymfdjastyjxjkljqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101727.0224369-609-55149788361898/AnsiballZ_command.py'
Nov 25 20:15:27 compute-0 sudo[148016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:27 compute-0 blissful_carver[147848]: {
Nov 25 20:15:27 compute-0 blissful_carver[147848]:     "0": [
Nov 25 20:15:27 compute-0 blissful_carver[147848]:         {
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "devices": [
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "/dev/loop3"
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             ],
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_name": "ceph_lv0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_size": "21470642176",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "name": "ceph_lv0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "tags": {
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.cluster_name": "ceph",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.crush_device_class": "",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.encrypted": "0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.osd_id": "0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.type": "block",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.vdo": "0"
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             },
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "type": "block",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "vg_name": "ceph_vg0"
Nov 25 20:15:27 compute-0 blissful_carver[147848]:         }
Nov 25 20:15:27 compute-0 blissful_carver[147848]:     ],
Nov 25 20:15:27 compute-0 blissful_carver[147848]:     "1": [
Nov 25 20:15:27 compute-0 blissful_carver[147848]:         {
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "devices": [
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "/dev/loop4"
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             ],
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_name": "ceph_lv1",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_size": "21470642176",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "name": "ceph_lv1",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "tags": {
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.cluster_name": "ceph",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.crush_device_class": "",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.encrypted": "0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.osd_id": "1",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.type": "block",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.vdo": "0"
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             },
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "type": "block",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "vg_name": "ceph_vg1"
Nov 25 20:15:27 compute-0 blissful_carver[147848]:         }
Nov 25 20:15:27 compute-0 blissful_carver[147848]:     ],
Nov 25 20:15:27 compute-0 blissful_carver[147848]:     "2": [
Nov 25 20:15:27 compute-0 blissful_carver[147848]:         {
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "devices": [
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "/dev/loop5"
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             ],
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_name": "ceph_lv2",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_size": "21470642176",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "name": "ceph_lv2",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "tags": {
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.cluster_name": "ceph",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.crush_device_class": "",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.encrypted": "0",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.osd_id": "2",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.type": "block",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:                 "ceph.vdo": "0"
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             },
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "type": "block",
Nov 25 20:15:27 compute-0 blissful_carver[147848]:             "vg_name": "ceph_vg2"
Nov 25 20:15:27 compute-0 blissful_carver[147848]:         }
Nov 25 20:15:27 compute-0 blissful_carver[147848]:     ]
Nov 25 20:15:27 compute-0 blissful_carver[147848]: }
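[Annotation] The blissful_carver block above is the JSON answer to the "ceph-volume ... lvm list --format json" call issued at 20:15:25: three OSDs (ids 0-2), each backed by one ~20 GiB LV on a loop device, with the OSD metadata present twice, once flattened into the lv_tags string and once parsed into the tags map. A small sketch of consuming that output, including re-parsing lv_tags to confirm it matches tags:

    # Illustrative consumer for the ceph-volume lvm list JSON above.
    # Pipe the JSON in on stdin, e.g.:
    #   cephadm ceph-volume --fsid <fsid> -- lvm list --format json | python3 parse_lvs.py
    import json
    import sys

    data = json.load(sys.stdin)
    for osd_id, lvs in sorted(data.items()):
        for lv in lvs:
            # lv_tags is the tags map flattened into "k=v,k=v,..." form;
            # re-parse it and check the two representations agree.
            parsed = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            assert parsed == lv["tags"]
            print(osd_id, lv["lv_path"], lv["devices"], parsed["ceph.osd_fsid"])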
Nov 25 20:15:27 compute-0 systemd[1]: libpod-0606e0c4ce34dd4e4032a6e63571d25b0a2f9941f2c735fd3bfcc9e46ce15abd.scope: Deactivated successfully.
Nov 25 20:15:27 compute-0 podman[147746]: 2025-11-25 20:15:27.567899754 +0000 UTC m=+1.187649455 container died 0606e0c4ce34dd4e4032a6e63571d25b0a2f9941f2c735fd3bfcc9e46ce15abd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_carver, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4587822db1a43439a073db72421f3533af9de3840a5f06f9ca5ce624be0b97fe-merged.mount: Deactivated successfully.
Nov 25 20:15:27 compute-0 podman[147746]: 2025-11-25 20:15:27.639284115 +0000 UTC m=+1.259033826 container remove 0606e0c4ce34dd4e4032a6e63571d25b0a2f9941f2c735fd3bfcc9e46ce15abd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_carver, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 20:15:27 compute-0 systemd[1]: libpod-conmon-0606e0c4ce34dd4e4032a6e63571d25b0a2f9941f2c735fd3bfcc9e46ce15abd.scope: Deactivated successfully.
Nov 25 20:15:27 compute-0 python3.9[148020]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:15:27 compute-0 sudo[147576]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:27 compute-0 ovs-vsctl[148037]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 20:15:27 compute-0 sudo[148016]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:27 compute-0 sudo[148038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:15:27 compute-0 sudo[148038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:27 compute-0 sudo[148038]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:27 compute-0 sudo[148067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:15:27 compute-0 sudo[148067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:27 compute-0 sudo[148067]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00024|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00025|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00026|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00027|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00028|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00029|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00030|main|INFO|OVS feature set changed, force recompute.
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 20:15:27 compute-0 ovn_controller[147726]: 2025-11-25T20:15:27Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 20:15:27 compute-0 NetworkManager[49051]: <info>  [1764101727.9203] manager: (ovn-616d13-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 20:15:27 compute-0 sudo[148112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:15:27 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 20:15:27 compute-0 systemd-udevd[147890]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 20:15:27 compute-0 NetworkManager[49051]: <info>  [1764101727.9433] device (genev_sys_6081): carrier: link connected
Nov 25 20:15:27 compute-0 NetworkManager[49051]: <info>  [1764101727.9438] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 20:15:27 compute-0 sudo[148112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:27 compute-0 sudo[148112]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:28 compute-0 ceph-mon[75144]: pgmap v315: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:28 compute-0 sudo[148161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:15:28 compute-0 sudo[148161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:28 compute-0 sudo[148329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayskkdpjjlootwrjolraynjalcoeliki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101727.9736814-617-65209301920380/AnsiballZ_command.py'
Nov 25 20:15:28 compute-0 sudo[148329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:28 compute-0 podman[148333]: 2025-11-25 20:15:28.46316737 +0000 UTC m=+0.062676949 container create 7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:15:28 compute-0 systemd[1]: Started libpod-conmon-7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629.scope.
Nov 25 20:15:28 compute-0 podman[148333]: 2025-11-25 20:15:28.432650772 +0000 UTC m=+0.032160411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:15:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:15:28 compute-0 python3.9[148332]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:15:28 compute-0 podman[148333]: 2025-11-25 20:15:28.567316447 +0000 UTC m=+0.166826086 container init 7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:15:28 compute-0 podman[148333]: 2025-11-25 20:15:28.581614201 +0000 UTC m=+0.181123780 container start 7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:15:28 compute-0 podman[148333]: 2025-11-25 20:15:28.585571333 +0000 UTC m=+0.185080902 container attach 7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:15:28 compute-0 busy_sinoussi[148349]: 167 167
Nov 25 20:15:28 compute-0 systemd[1]: libpod-7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629.scope: Deactivated successfully.
Nov 25 20:15:28 compute-0 ovs-vsctl[148354]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
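The ovs|00001|db_ctl_base|ERR line above is the expected outcome of the probe logged at 20:15:28: "ovs-vsctl get" exits non-zero when the requested map key is unset, and Ansible then sees a failed shell command. A minimal sketch of a non-fatal probe follows, assuming the --if-exists behaviour described in ovs-vsctl(8) (an empty value is printed instead of an error when the key is absent); this is not the command the job actually ran:

    # Hedged sketch: probe external_ids:ovn-cms-options without
    # treating a missing key as a failure.
    import subprocess

    def get_cms_options() -> str:
        out = subprocess.run(
            ["ovs-vsctl", "--if-exists", "get",
             "Open_vSwitch", ".", "external_ids:ovn-cms-options"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out.strip('"')  # ovs-vsctl prints map values quoted

    print(get_cms_options() or "<unset>")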
Nov 25 20:15:28 compute-0 conmon[148349]: conmon 7cd7a7f43f5882187cc2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629.scope/container/memory.events
Nov 25 20:15:28 compute-0 podman[148333]: 2025-11-25 20:15:28.595389863 +0000 UTC m=+0.194899432 container died 7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 20:15:28 compute-0 sudo[148329]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ab22961b097aa74c7d8f0cd0e946e5665dd1bc10cab29eb4441edddd2bbf86a-merged.mount: Deactivated successfully.
Nov 25 20:15:28 compute-0 podman[148333]: 2025-11-25 20:15:28.641469679 +0000 UTC m=+0.240979228 container remove 7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:15:28 compute-0 systemd[1]: libpod-conmon-7cd7a7f43f5882187cc212121824e9a5ef0aa92bb7854858749c059edd53d629.scope: Deactivated successfully.
Nov 25 20:15:28 compute-0 podman[148400]: 2025-11-25 20:15:28.869840373 +0000 UTC m=+0.061844878 container create 9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:15:28 compute-0 systemd[1]: Started libpod-conmon-9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12.scope.
Nov 25 20:15:28 compute-0 podman[148400]: 2025-11-25 20:15:28.849944286 +0000 UTC m=+0.041948821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:15:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v316: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908f36ef179dee2d4c6e7a411c7444bf8ad675b7e5619467470c767fe35e45d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908f36ef179dee2d4c6e7a411c7444bf8ad675b7e5619467470c767fe35e45d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908f36ef179dee2d4c6e7a411c7444bf8ad675b7e5619467470c767fe35e45d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908f36ef179dee2d4c6e7a411c7444bf8ad675b7e5619467470c767fe35e45d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:15:28 compute-0 podman[148400]: 2025-11-25 20:15:28.978380892 +0000 UTC m=+0.170385477 container init 9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:15:28 compute-0 podman[148400]: 2025-11-25 20:15:28.990598824 +0000 UTC m=+0.182603349 container start 9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 20:15:28 compute-0 podman[148400]: 2025-11-25 20:15:28.996012732 +0000 UTC m=+0.188017317 container attach 9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 20:15:29 compute-0 sudo[148555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbmamlcjlvwdseleediopqmzrinqdhcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101729.1460266-631-11524819253602/AnsiballZ_command.py'
Nov 25 20:15:29 compute-0 sudo[148555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:30 compute-0 python3.9[148560]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:15:30 compute-0 ovs-vsctl[148577]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 25 20:15:30 compute-0 quizzical_wright[148417]: {
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "osd_id": 2,
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "type": "bluestore"
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:     },
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "osd_id": 1,
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "type": "bluestore"
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:     },
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "osd_id": 0,
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:         "type": "bluestore"
Nov 25 20:15:30 compute-0 quizzical_wright[148417]:     }
Nov 25 20:15:30 compute-0 quizzical_wright[148417]: }
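The JSON printed by the quizzical_wright container above is the output of "ceph-volume raw list --format json", which cephadm runs via the sudo/podman wrapper logged at 20:15:28 to refresh its per-host device inventory before writing it back through "config-key set ... host.compute-0.devices.0". A minimal sketch of consuming that output, assuming ceph-volume is invoked directly rather than through the cephadm wrapper (this is not cephadm's actual code):

    # Hedged sketch: turn "raw list" JSON into an osd_id -> device
    # map, filtered to this cluster's fsid.
    import json
    import subprocess

    FSID = "712dd110-763a-5547-8ef7-acda1414fdce"  # from the log above

    raw = subprocess.run(
        ["ceph-volume", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    devices = {
        osd["osd_id"]: osd["device"]
        for osd in json.loads(raw).values()
        if osd.get("ceph_fsid") == FSID and osd.get("type") == "bluestore"
    }
    print(devices)  # {2: '/dev/mapper/ceph_vg2-ceph_lv2', 1: ..., 0: ...}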
Nov 25 20:15:30 compute-0 ceph-mon[75144]: pgmap v316: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:30 compute-0 systemd[1]: libpod-9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12.scope: Deactivated successfully.
Nov 25 20:15:30 compute-0 systemd[1]: libpod-9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12.scope: Consumed 1.080s CPU time.
Nov 25 20:15:30 compute-0 podman[148400]: 2025-11-25 20:15:30.074934822 +0000 UTC m=+1.266939377 container died 9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wright, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:15:30 compute-0 sudo[148555]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-908f36ef179dee2d4c6e7a411c7444bf8ad675b7e5619467470c767fe35e45d4-merged.mount: Deactivated successfully.
Nov 25 20:15:30 compute-0 podman[148400]: 2025-11-25 20:15:30.160889995 +0000 UTC m=+1.352894490 container remove 9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 20:15:30 compute-0 systemd[1]: libpod-conmon-9c6a80bcb781c7de7f38a11096a568ab720ca9b3a75acb54b51f5dd380255a12.scope: Deactivated successfully.
Nov 25 20:15:30 compute-0 sudo[148161]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:15:30 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:15:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:15:30 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:15:30 compute-0 sudo[148616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:15:30 compute-0 sudo[148616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:30 compute-0 sudo[148616]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:30 compute-0 sudo[148641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:15:30 compute-0 sudo[148641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:15:30 compute-0 sudo[148641]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:30 compute-0 sshd-session[135704]: Connection closed by 192.168.122.30 port 59318
Nov 25 20:15:30 compute-0 sshd-session[135701]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:15:30 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 25 20:15:30 compute-0 systemd[1]: session-46.scope: Consumed 1min 9.184s CPU time.
Nov 25 20:15:30 compute-0 systemd-logind[789]: Session 46 logged out. Waiting for processes to exit.
Nov 25 20:15:30 compute-0 systemd-logind[789]: Removed session 46.
Nov 25 20:15:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v317: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:15:31 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:15:31 compute-0 ceph-mon[75144]: pgmap v317: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v318: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:34 compute-0 ceph-mon[75144]: pgmap v318: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v319: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:36 compute-0 ceph-mon[75144]: pgmap v319: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:36 compute-0 sshd-session[148670]: Accepted publickey for zuul from 192.168.122.30 port 60872 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:15:36 compute-0 systemd-logind[789]: New session 48 of user zuul.
Nov 25 20:15:36 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 25 20:15:36 compute-0 sshd-session[148670]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:15:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v320: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:37 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 25 20:15:37 compute-0 systemd[147781]: Activating special unit Exit the Session...
Nov 25 20:15:37 compute-0 systemd[147781]: Stopped target Main User Target.
Nov 25 20:15:37 compute-0 systemd[147781]: Stopped target Basic System.
Nov 25 20:15:37 compute-0 systemd[147781]: Stopped target Paths.
Nov 25 20:15:37 compute-0 systemd[147781]: Stopped target Sockets.
Nov 25 20:15:37 compute-0 systemd[147781]: Stopped target Timers.
Nov 25 20:15:37 compute-0 systemd[147781]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 20:15:37 compute-0 systemd[147781]: Closed D-Bus User Message Bus Socket.
Nov 25 20:15:37 compute-0 systemd[147781]: Stopped Create User's Volatile Files and Directories.
Nov 25 20:15:37 compute-0 systemd[147781]: Removed slice User Application Slice.
Nov 25 20:15:37 compute-0 systemd[147781]: Reached target Shutdown.
Nov 25 20:15:37 compute-0 systemd[147781]: Finished Exit the Session.
Nov 25 20:15:37 compute-0 systemd[147781]: Reached target Exit the Session.
Nov 25 20:15:37 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 20:15:37 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 25 20:15:37 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 20:15:37 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 20:15:37 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 20:15:37 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 20:15:37 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 20:15:37 compute-0 python3.9[148824]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:15:38 compute-0 ceph-mon[75144]: pgmap v320: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:38 compute-0 sudo[148978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gobmypchtifexxaqncinywsbioppdfsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101738.3545282-34-208279819063210/AnsiballZ_file.py'
Nov 25 20:15:38 compute-0 sudo[148978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v321: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:39 compute-0 python3.9[148980]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:39 compute-0 sudo[148978]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:39 compute-0 sudo[149130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkscwuoudqtfngmdtmhasuzkbelhiuom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101739.2790399-34-8470524207767/AnsiballZ_file.py'
Nov 25 20:15:39 compute-0 sudo[149130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:39 compute-0 python3.9[149132]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:39 compute-0 sudo[149130]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:40 compute-0 ceph-mon[75144]: pgmap v321: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:40 compute-0 sudo[149282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-friwoxrgbuzkeevgbgpzhrnlzifxojoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101740.1203525-34-77396585269303/AnsiballZ_file.py'
Nov 25 20:15:40 compute-0 sudo[149282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:40 compute-0 python3.9[149284]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:40 compute-0 sudo[149282]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v322: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:41 compute-0 sudo[149434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpfabbvvvqcxkxwcojzljnszxmrgnzes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101740.9231012-34-193652880561051/AnsiballZ_file.py'
Nov 25 20:15:41 compute-0 sudo[149434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:41 compute-0 python3.9[149436]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:41 compute-0 sudo[149434]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:42 compute-0 ceph-mon[75144]: pgmap v322: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:42 compute-0 sudo[149586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iijxytemxqamzcqspliejaqjrxksnzxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101741.742493-34-19674687335354/AnsiballZ_file.py'
Nov 25 20:15:42 compute-0 sudo[149586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:42 compute-0 python3.9[149588]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:42 compute-0 sudo[149586]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v323: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:43 compute-0 python3.9[149738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:15:44 compute-0 ceph-mon[75144]: pgmap v323: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:44 compute-0 sudo[149888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slvqpkrqookyimbvoccjjlutwecizooa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101743.5376709-78-243066776671221/AnsiballZ_seboolean.py'
Nov 25 20:15:44 compute-0 sudo[149888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:44 compute-0 python3.9[149890]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 20:15:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v324: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:44 compute-0 sudo[149888]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:45 compute-0 python3.9[150040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:15:46 compute-0 ceph-mon[75144]: pgmap v324: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:46 compute-0 python3.9[150162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764101745.2213135-86-260600833550211/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v325: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:47 compute-0 python3.9[150312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:15:48 compute-0 ceph-mon[75144]: pgmap v325: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:48 compute-0 python3.9[150433]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764101747.0077567-101-122533810646849/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:48 compute-0 sudo[150583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjfqzjohvatpmhspnryqyarbyjbywgzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101748.6479719-118-219033782886710/AnsiballZ_setup.py'
Nov 25 20:15:48 compute-0 sudo[150583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v326: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:49 compute-0 python3.9[150585]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:15:49 compute-0 sudo[150583]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:50 compute-0 sudo[150667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsmuzhtfwbaldsegstzvgmxuurfcyzci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101748.6479719-118-219033782886710/AnsiballZ_dnf.py'
Nov 25 20:15:50 compute-0 sudo[150667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:50 compute-0 ceph-mon[75144]: pgmap v326: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:50 compute-0 python3.9[150669]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:15:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v327: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:51 compute-0 sudo[150667]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:52 compute-0 ceph-mon[75144]: pgmap v327: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:52 compute-0 sudo[150820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmeiypntzxsqlwzbgaprscgacuwlysks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101751.76229-130-164402018633482/AnsiballZ_systemd.py'
Nov 25 20:15:52 compute-0 sudo[150820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:15:52 compute-0 python3.9[150822]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:15:52 compute-0 sudo[150820]: pam_unix(sudo:session): session closed for user root
Nov 25 20:15:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v328: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:53 compute-0 python3.9[150975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:15:54 compute-0 ceph-mon[75144]: pgmap v328: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:54 compute-0 python3.9[151096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764101753.1611032-138-205222377592220/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v329: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:55 compute-0 python3.9[151246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:15:55 compute-0 python3.9[151367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764101754.7000968-138-13085805592017/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:56 compute-0 ceph-mon[75144]: pgmap v329: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:15:56
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:15:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v330: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:57 compute-0 ovn_controller[147726]: 2025-11-25T20:15:57Z|00031|memory|INFO|17024 kB peak resident set size after 30.1 seconds
Nov 25 20:15:57 compute-0 ovn_controller[147726]: 2025-11-25T20:15:57Z|00032|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 25 20:15:57 compute-0 podman[151421]: 2025-11-25 20:15:57.06254247 +0000 UTC m=+0.151320471 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:15:57 compute-0 python3.9[151542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:15:58 compute-0 ceph-mon[75144]: pgmap v330: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:58 compute-0 python3.9[151663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764101756.8337765-182-255046686037053/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:15:58 compute-0 python3.9[151813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:15:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v331: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:15:59 compute-0 python3.9[151934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764101758.337713-182-268600670571675/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:16:00 compute-0 ceph-mon[75144]: pgmap v331: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:00 compute-0 python3.9[152084]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:16:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v332: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:01 compute-0 sudo[152236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ensrxxplmewgqvyhhlwuxngmnprxnklc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101760.6271324-220-4311533863290/AnsiballZ_file.py'
Nov 25 20:16:01 compute-0 sudo[152236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:01 compute-0 python3.9[152238]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:16:01 compute-0 sudo[152236]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:01 compute-0 sudo[152388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbodlaisfhapqbgzctthelgdashztcws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101761.4625301-228-11936259867890/AnsiballZ_stat.py'
Nov 25 20:16:01 compute-0 sudo[152388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
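The pg_autoscaler figures above are internally consistent. One reading that reproduces the '.mgr' line exactly is pg_target = capacity_ratio x bias x (OSD count x mon_target_pg_per_osd), with this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100; treat this as an approximation of the autoscaler's behaviour, not its exact code:

    # Worked check of the ".mgr" autoscaler line (assumed formula).
    ratio = 1.4371499967441557e-05  # "using ... of space"
    bias = 1.0
    num_osds = 3                    # osd_id 0..2 on this host
    mon_target_pg_per_osd = 100     # Ceph default

    print(ratio * bias * num_osds * mon_target_pg_per_osd)
    # 0.004311449990232467 -- matches "pg target" in the log; the
    # autoscaler then rounds to a power of two ("quantized to 1"),
    # while pools reporting 0.0 usage keep their current pg_num of 32.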
Nov 25 20:16:02 compute-0 python3.9[152390]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:16:02 compute-0 sudo[152388]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:02 compute-0 ceph-mon[75144]: pgmap v332: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:02 compute-0 sudo[152466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifjgaaqtbgahfuhdnjtzoyeetfcozypd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101761.4625301-228-11936259867890/AnsiballZ_file.py'
Nov 25 20:16:02 compute-0 sudo[152466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:02 compute-0 python3.9[152468]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:16:02 compute-0 sudo[152466]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v333: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:03 compute-0 sudo[152618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaoprpqjmmnilrqjmcjalpneboejgzsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101762.8342235-228-90196926248657/AnsiballZ_stat.py'
Nov 25 20:16:03 compute-0 ceph-mon[75144]: pgmap v333: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:03 compute-0 sudo[152618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:03 compute-0 python3.9[152620]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:16:03 compute-0 sudo[152618]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:03 compute-0 sudo[152696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nibipnqmzcruacciubonhxgoowerwdcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101762.8342235-228-90196926248657/AnsiballZ_file.py'
Nov 25 20:16:03 compute-0 sudo[152696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:03 compute-0 python3.9[152698]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:16:04 compute-0 sudo[152696]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:04 compute-0 sudo[152848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofclzeoizonyrinpqmkucynafpuvgzza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101764.2578998-251-182500213467230/AnsiballZ_file.py'
Nov 25 20:16:04 compute-0 sudo[152848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:04 compute-0 python3.9[152850]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:16:04 compute-0 sudo[152848]: pam_unix(sudo:session): session closed for user root
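One oddity worth decoding: the file task above logs mode=420 while the surrounding tasks log mode=0644. 420 is simply octal 0644 written in decimal; the usual cause is a playbook supplying the mode as an unquoted YAML scalar, which YAML 1.1 parses as an octal integer:

    # 0644 (octal) == 420 (decimal); same permissions either way.
    print(oct(420))       # 0o644
    print(int("644", 8))  # 420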
Nov 25 20:16:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v334: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:05 compute-0 sudo[153000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atvdupvawcitvaphhnlpgyijwbrzgimr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101765.0936217-259-22682548233822/AnsiballZ_stat.py'
Nov 25 20:16:05 compute-0 sudo[153000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:05 compute-0 python3.9[153002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:16:05 compute-0 sudo[153000]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:06 compute-0 sudo[153078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjlspgxvlotshfmemwzrwigjdyrbuosx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101765.0936217-259-22682548233822/AnsiballZ_file.py'
Nov 25 20:16:06 compute-0 sudo[153078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:06 compute-0 ceph-mon[75144]: pgmap v334: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:06 compute-0 python3.9[153080]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:16:06 compute-0 sudo[153078]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:06 compute-0 sudo[153230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmkylpvfrplzrrsuubewjcskvgwxqaar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101766.4795108-271-187843269736257/AnsiballZ_stat.py'
Nov 25 20:16:06 compute-0 sudo[153230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v335: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:07 compute-0 python3.9[153232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:16:07 compute-0 sudo[153230]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:07 compute-0 sudo[153308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pptlqjshfasvxysrnbdvxubbuultjjto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101766.4795108-271-187843269736257/AnsiballZ_file.py'
Nov 25 20:16:07 compute-0 sudo[153308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:07 compute-0 python3.9[153310]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:16:07 compute-0 sudo[153308]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:08 compute-0 ceph-mon[75144]: pgmap v335: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:08 compute-0 sudo[153460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chzpykysuwuwashbcjjpbkyvmwzrflyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101768.0199924-283-260778351386493/AnsiballZ_systemd.py'
Nov 25 20:16:08 compute-0 sudo[153460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:08 compute-0 python3.9[153462]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:16:08 compute-0 systemd[1]: Reloading.
Nov 25 20:16:08 compute-0 systemd-rc-local-generator[153491]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:16:08 compute-0 systemd-sysv-generator[153494]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:16:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v336: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:09 compute-0 sudo[153460]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:09 compute-0 sudo[153649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jophsgdpwspjkdaycbyqirglohwbmujt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101769.4796336-291-105181617231211/AnsiballZ_stat.py'
Nov 25 20:16:09 compute-0 sudo[153649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:10 compute-0 ceph-mon[75144]: pgmap v336: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:10 compute-0 python3.9[153651]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:16:10 compute-0 sudo[153649]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:10 compute-0 sudo[153727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwynbhmvtfgkdhckigqrqyenhwtpthgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101769.4796336-291-105181617231211/AnsiballZ_file.py'
Nov 25 20:16:10 compute-0 sudo[153727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:10 compute-0 python3.9[153729]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:16:10 compute-0 sudo[153727]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v337: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:11 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:11 compute-0 sudo[153879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzprmcbovtvtfzmwdbszmbpeqkcqopaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101771.0422206-303-38534221940227/AnsiballZ_stat.py'
Nov 25 20:16:11 compute-0 sudo[153879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:11 compute-0 python3.9[153881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:16:11 compute-0 sudo[153879]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:12 compute-0 sudo[153957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqcfvnkzgrchtbszzfpjvwfwuwhyhero ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101771.0422206-303-38534221940227/AnsiballZ_file.py'
Nov 25 20:16:12 compute-0 sudo[153957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:12 compute-0 ceph-mon[75144]: pgmap v337: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:12 compute-0 python3.9[153959]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:16:12 compute-0 sudo[153957]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:12 compute-0 sudo[154109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gakwhjmvmixznoswbpuarpbvdbuyaiff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101772.5013964-315-192553975571067/AnsiballZ_systemd.py'
Nov 25 20:16:12 compute-0 sudo[154109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v338: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:13 compute-0 python3.9[154111]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:16:13 compute-0 systemd[1]: Reloading.
Nov 25 20:16:13 compute-0 systemd-rc-local-generator[154140]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:16:13 compute-0 systemd-sysv-generator[154144]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:16:13 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 20:16:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 20:16:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 20:16:13 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 20:16:13 compute-0 sudo[154109]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:14 compute-0 ceph-mon[75144]: pgmap v338: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:14 compute-0 sudo[154304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbzdnubkbqntigoblpvqkvqqhsjjglic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101773.9862053-325-199068133628981/AnsiballZ_file.py'
Nov 25 20:16:14 compute-0 sudo[154304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:14 compute-0 python3.9[154306]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:16:14 compute-0 sudo[154304]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v339: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:15 compute-0 sudo[154456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flkedixpeoyhcaiqkdnoyslxvxubmztk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101774.8166256-333-255291934549042/AnsiballZ_stat.py'
Nov 25 20:16:15 compute-0 sudo[154456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:15 compute-0 python3.9[154458]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:16:15 compute-0 sudo[154456]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:15 compute-0 sudo[154579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nppdigqjkyjnovfyqmvswzkgsgvlzhey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101774.8166256-333-255291934549042/AnsiballZ_copy.py'
Nov 25 20:16:15 compute-0 sudo[154579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:16 compute-0 ceph-mon[75144]: pgmap v339: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:16 compute-0 python3.9[154581]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764101774.8166256-333-255291934549042/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:16:16 compute-0 sudo[154579]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:16 compute-0 sudo[154731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usdtzroyzxxnkxdafjctnjbeazyuccex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101776.519536-350-199946810654335/AnsiballZ_file.py'
Nov 25 20:16:16 compute-0 sudo[154731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v340: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:17 compute-0 python3.9[154733]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:16:17 compute-0 sudo[154731]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:17 compute-0 sudo[154883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hicsylshxhoxjamoroiprsddshmlezez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101777.3938453-358-275057034927998/AnsiballZ_stat.py'
Nov 25 20:16:17 compute-0 sudo[154883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:18 compute-0 python3.9[154885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:16:18 compute-0 sudo[154883]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:18 compute-0 ceph-mon[75144]: pgmap v340: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:18 compute-0 sudo[155007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mplzkywsgjadzoaogasprwzfgrqsibcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101777.3938453-358-275057034927998/AnsiballZ_copy.py'
Nov 25 20:16:18 compute-0 sudo[155007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:18 compute-0 python3.9[155009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764101777.3938453-358-275057034927998/.source.json _original_basename=.qat639qo follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:16:18 compute-0 sudo[155007]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v341: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:19 compute-0 sudo[155159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvjkrigqgefwtdkfweqsblgtjfyinmgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101778.9845612-373-19972208525054/AnsiballZ_file.py'
Nov 25 20:16:19 compute-0 sudo[155159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v342: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v343: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v344: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:25 compute-0 python3.9[155161]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:16:25 compute-0 ceph-mon[75144]: pgmap v341: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:25 compute-0 ceph-mon[75144]: pgmap v342: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:25 compute-0 ceph-mon[75144]: pgmap v343: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:25 compute-0 sudo[155159]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:25 compute-0 sudo[155311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgautrnwazsutetuaxntbzxypolkwzro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101785.405485-381-64662755779997/AnsiballZ_stat.py'
Nov 25 20:16:25 compute-0 sudo[155311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:25 compute-0 sudo[155311]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:26 compute-0 ceph-mon[75144]: pgmap v344: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:26 compute-0 sudo[155434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjkbzxhdlcdbrxmpzgenjwxmtcuddqty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101785.405485-381-64662755779997/AnsiballZ_copy.py'
Nov 25 20:16:26 compute-0 sudo[155434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:26 compute-0 sudo[155434]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:16:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:16:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:16:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:16:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:16:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:16:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v345: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:27 compute-0 ceph-mon[75144]: pgmap v345: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:27 compute-0 sudo[155599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whjxhncbkeqpakjsnscclyhiwmwqcsbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101786.8973043-398-38404336425554/AnsiballZ_container_config_data.py'
Nov 25 20:16:27 compute-0 sudo[155599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:27 compute-0 podman[155560]: 2025-11-25 20:16:27.558868408 +0000 UTC m=+0.147811600 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 20:16:27 compute-0 python3.9[155607]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 25 20:16:27 compute-0 sudo[155599]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:28 compute-0 sudo[155764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmukivujmxmonlpbxlufiqetscilwziv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101788.0294209-407-5804158523769/AnsiballZ_container_config_hash.py'
Nov 25 20:16:28 compute-0 sudo[155764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:28 compute-0 python3.9[155766]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:16:28 compute-0 sudo[155764]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v346: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:29 compute-0 sudo[155916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgowirxeumcrhtbovmavswqrhnfkqvxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101789.1117039-416-41940025523138/AnsiballZ_podman_container_info.py'
Nov 25 20:16:29 compute-0 sudo[155916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:30 compute-0 ceph-mon[75144]: pgmap v346: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:30 compute-0 python3.9[155918]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 20:16:30 compute-0 sudo[155916]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:30 compute-0 sudo[155946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:30 compute-0 sudo[155946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:30 compute-0 sudo[155946]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:30 compute-0 sudo[155995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:16:30 compute-0 sudo[155995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:30 compute-0 sudo[155995]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:30 compute-0 sudo[156020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:30 compute-0 sudo[156020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:30 compute-0 sudo[156020]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:30 compute-0 sudo[156045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 20:16:30 compute-0 sudo[156045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v347: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:30 compute-0 sudo[156045]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:16:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:16:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:31 compute-0 sudo[156112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:31 compute-0 sudo[156112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:31 compute-0 sudo[156112]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:31 compute-0 sudo[156168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:16:31 compute-0 sudo[156168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:31 compute-0 sudo[156168]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:31 compute-0 sudo[156193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:31 compute-0 sudo[156193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:31 compute-0 sudo[156193]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:31 compute-0 sudo[156218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:16:31 compute-0 sudo[156218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:31 compute-0 sudo[156329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttznkqtsowaxxwoqvtjudtqxlrorcxnc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764101791.0259938-429-268760814934138/AnsiballZ_edpm_container_manage.py'
Nov 25 20:16:31 compute-0 sudo[156329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:31 compute-0 python3[156331]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:16:31 compute-0 sudo[156218]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:16:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:16:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:16:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:16:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:16:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:31 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev f5c90ec6-4995-4074-a3c8-e2ca18b5c665 does not exist
Nov 25 20:16:31 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev e009408f-8136-480e-b08f-30dde135db38 does not exist
Nov 25 20:16:31 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 59b8e68a-174a-4601-be5f-a5e3e55e40b9 does not exist
Nov 25 20:16:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:16:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:16:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:16:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:16:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:16:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:16:32 compute-0 ceph-mon[75144]: pgmap v347: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:16:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:16:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:16:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:16:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:16:32 compute-0 sudo[156374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:32 compute-0 sudo[156374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:32 compute-0 sudo[156374]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:32 compute-0 sudo[156399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:16:32 compute-0 sudo[156399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:32 compute-0 sudo[156399]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:32 compute-0 sudo[156424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:32 compute-0 sudo[156424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:32 compute-0 sudo[156424]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:32 compute-0 sudo[156449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:16:32 compute-0 sudo[156449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:32 compute-0 podman[156527]: 2025-11-25 20:16:32.676168045 +0000 UTC m=+0.055458830 container create 8d02b5b221780565578f2d4502ec80d70f3b0e75471ac76f1ddf7a22cb0ee32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 20:16:32 compute-0 systemd[1]: Started libpod-conmon-8d02b5b221780565578f2d4502ec80d70f3b0e75471ac76f1ddf7a22cb0ee32c.scope.
Nov 25 20:16:32 compute-0 podman[156527]: 2025-11-25 20:16:32.658626713 +0000 UTC m=+0.037917548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:16:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:16:32 compute-0 podman[156527]: 2025-11-25 20:16:32.779764138 +0000 UTC m=+0.159054953 container init 8d02b5b221780565578f2d4502ec80d70f3b0e75471ac76f1ddf7a22cb0ee32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:16:32 compute-0 podman[156527]: 2025-11-25 20:16:32.787386021 +0000 UTC m=+0.166676806 container start 8d02b5b221780565578f2d4502ec80d70f3b0e75471ac76f1ddf7a22cb0ee32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:16:32 compute-0 podman[156527]: 2025-11-25 20:16:32.791112384 +0000 UTC m=+0.170403199 container attach 8d02b5b221780565578f2d4502ec80d70f3b0e75471ac76f1ddf7a22cb0ee32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:16:32 compute-0 intelligent_panini[156545]: 167 167
Nov 25 20:16:32 compute-0 systemd[1]: libpod-8d02b5b221780565578f2d4502ec80d70f3b0e75471ac76f1ddf7a22cb0ee32c.scope: Deactivated successfully.
Nov 25 20:16:32 compute-0 podman[156527]: 2025-11-25 20:16:32.7933088 +0000 UTC m=+0.172599585 container died 8d02b5b221780565578f2d4502ec80d70f3b0e75471ac76f1ddf7a22cb0ee32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-34c5d7d9ae44ea4aa4ff4c54c753d94c1f67ffa8b8e45842790024360922c2fa-merged.mount: Deactivated successfully.
Nov 25 20:16:32 compute-0 podman[156527]: 2025-11-25 20:16:32.854288027 +0000 UTC m=+0.233578812 container remove 8d02b5b221780565578f2d4502ec80d70f3b0e75471ac76f1ddf7a22cb0ee32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:16:32 compute-0 systemd[1]: libpod-conmon-8d02b5b221780565578f2d4502ec80d70f3b0e75471ac76f1ddf7a22cb0ee32c.scope: Deactivated successfully.
Nov 25 20:16:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v348: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:33 compute-0 podman[156572]: 2025-11-25 20:16:33.030894292 +0000 UTC m=+0.051550601 container create 061de9ea98b1f81cc48760548175fa3067f5a1003aa8b5f1e878bcaf880525e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kowalevski, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 20:16:33 compute-0 systemd[1]: Started libpod-conmon-061de9ea98b1f81cc48760548175fa3067f5a1003aa8b5f1e878bcaf880525e9.scope.
Nov 25 20:16:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586ddb788507057cc1bb506e99f9f93319a796b670a9608164c280747e51006c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586ddb788507057cc1bb506e99f9f93319a796b670a9608164c280747e51006c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586ddb788507057cc1bb506e99f9f93319a796b670a9608164c280747e51006c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586ddb788507057cc1bb506e99f9f93319a796b670a9608164c280747e51006c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586ddb788507057cc1bb506e99f9f93319a796b670a9608164c280747e51006c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:33 compute-0 podman[156572]: 2025-11-25 20:16:33.01415973 +0000 UTC m=+0.034816059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:16:33 compute-0 podman[156572]: 2025-11-25 20:16:33.119359504 +0000 UTC m=+0.140015833 container init 061de9ea98b1f81cc48760548175fa3067f5a1003aa8b5f1e878bcaf880525e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kowalevski, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:16:33 compute-0 podman[156572]: 2025-11-25 20:16:33.132055735 +0000 UTC m=+0.152712044 container start 061de9ea98b1f81cc48760548175fa3067f5a1003aa8b5f1e878bcaf880525e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kowalevski, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:16:33 compute-0 podman[156572]: 2025-11-25 20:16:33.135218765 +0000 UTC m=+0.155875074 container attach 061de9ea98b1f81cc48760548175fa3067f5a1003aa8b5f1e878bcaf880525e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:16:34 compute-0 ceph-mon[75144]: pgmap v348: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:34 compute-0 vigorous_kowalevski[156588]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:16:34 compute-0 vigorous_kowalevski[156588]: --> relative data size: 1.0
Nov 25 20:16:34 compute-0 vigorous_kowalevski[156588]: --> All data devices are unavailable
Nov 25 20:16:34 compute-0 systemd[1]: libpod-061de9ea98b1f81cc48760548175fa3067f5a1003aa8b5f1e878bcaf880525e9.scope: Deactivated successfully.
Nov 25 20:16:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v349: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:35 compute-0 podman[156634]: 2025-11-25 20:16:35.44020816 +0000 UTC m=+1.232994835 container died 061de9ea98b1f81cc48760548175fa3067f5a1003aa8b5f1e878bcaf880525e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 20:16:36 compute-0 ceph-mon[75144]: pgmap v349: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v350: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:37 compute-0 ceph-mon[75144]: pgmap v350: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v351: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:40 compute-0 ceph-mon[75144]: pgmap v351: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-586ddb788507057cc1bb506e99f9f93319a796b670a9608164c280747e51006c-merged.mount: Deactivated successfully.
Nov 25 20:16:40 compute-0 podman[156634]: 2025-11-25 20:16:40.571371336 +0000 UTC m=+6.364158021 container remove 061de9ea98b1f81cc48760548175fa3067f5a1003aa8b5f1e878bcaf880525e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:16:40 compute-0 systemd[1]: libpod-conmon-061de9ea98b1f81cc48760548175fa3067f5a1003aa8b5f1e878bcaf880525e9.scope: Deactivated successfully.
Nov 25 20:16:40 compute-0 sudo[156449]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:40 compute-0 podman[156361]: 2025-11-25 20:16:40.637111315 +0000 UTC m=+8.718618425 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 20:16:40 compute-0 sudo[156703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:40 compute-0 sudo[156703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:40 compute-0 sudo[156703]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:40 compute-0 sudo[156740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:16:40 compute-0 sudo[156740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:40 compute-0 sudo[156740]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:40 compute-0 podman[156773]: 2025-11-25 20:16:40.830679537 +0000 UTC m=+0.050840723 container create df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 25 20:16:40 compute-0 podman[156773]: 2025-11-25 20:16:40.802555848 +0000 UTC m=+0.022717044 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 20:16:40 compute-0 python3[156331]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 20:16:40 compute-0 sudo[156783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:40 compute-0 sudo[156783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:40 compute-0 sudo[156783]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:40 compute-0 sudo[156824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:16:40 compute-0 sudo[156824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v352: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:41 compute-0 sudo[156329]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:41 compute-0 podman[156944]: 2025-11-25 20:16:41.313125537 +0000 UTC m=+0.059479871 container create 37147ea3ce2a57f36261c13da8a11f33563067f327578a1c91f218aa6cc665e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:16:41 compute-0 systemd[1]: Started libpod-conmon-37147ea3ce2a57f36261c13da8a11f33563067f327578a1c91f218aa6cc665e7.scope.
Nov 25 20:16:41 compute-0 podman[156944]: 2025-11-25 20:16:41.28388077 +0000 UTC m=+0.030235174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:16:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:16:41 compute-0 podman[156944]: 2025-11-25 20:16:41.411128699 +0000 UTC m=+0.157483083 container init 37147ea3ce2a57f36261c13da8a11f33563067f327578a1c91f218aa6cc665e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:16:41 compute-0 podman[156944]: 2025-11-25 20:16:41.423941953 +0000 UTC m=+0.170296277 container start 37147ea3ce2a57f36261c13da8a11f33563067f327578a1c91f218aa6cc665e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ishizaka, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:16:41 compute-0 podman[156944]: 2025-11-25 20:16:41.428058536 +0000 UTC m=+0.174412940 container attach 37147ea3ce2a57f36261c13da8a11f33563067f327578a1c91f218aa6cc665e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:16:41 compute-0 zen_ishizaka[156993]: 167 167
Nov 25 20:16:41 compute-0 systemd[1]: libpod-37147ea3ce2a57f36261c13da8a11f33563067f327578a1c91f218aa6cc665e7.scope: Deactivated successfully.
Nov 25 20:16:41 compute-0 podman[156944]: 2025-11-25 20:16:41.430034316 +0000 UTC m=+0.176388650 container died 37147ea3ce2a57f36261c13da8a11f33563067f327578a1c91f218aa6cc665e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:16:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1a7b77780d834f938ff4e46e7858ed69581b72afce26ff3a652225ab8ba7926-merged.mount: Deactivated successfully.
Nov 25 20:16:41 compute-0 podman[156944]: 2025-11-25 20:16:41.477624287 +0000 UTC m=+0.223978591 container remove 37147ea3ce2a57f36261c13da8a11f33563067f327578a1c91f218aa6cc665e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:16:41 compute-0 systemd[1]: libpod-conmon-37147ea3ce2a57f36261c13da8a11f33563067f327578a1c91f218aa6cc665e7.scope: Deactivated successfully.
Nov 25 20:16:41 compute-0 sudo[157085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkigeqwpguuikqyjhpsbohdraceyqikz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101801.2664022-437-206829274757316/AnsiballZ_stat.py'
Nov 25 20:16:41 compute-0 sudo[157085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:41 compute-0 podman[157092]: 2025-11-25 20:16:41.747096264 +0000 UTC m=+0.076331516 container create a00a186705d9441b4d5ab4ecd03354ed301962da237da97923df72b7711a3510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hamilton, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:16:41 compute-0 systemd[1]: Started libpod-conmon-a00a186705d9441b4d5ab4ecd03354ed301962da237da97923df72b7711a3510.scope.
Nov 25 20:16:41 compute-0 podman[157092]: 2025-11-25 20:16:41.716721268 +0000 UTC m=+0.045956570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:16:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:16:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e9323a316fc7dadd0408ebe74dae350ac83d6bba0e4232a8b109146bd7784b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e9323a316fc7dadd0408ebe74dae350ac83d6bba0e4232a8b109146bd7784b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e9323a316fc7dadd0408ebe74dae350ac83d6bba0e4232a8b109146bd7784b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e9323a316fc7dadd0408ebe74dae350ac83d6bba0e4232a8b109146bd7784b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:41 compute-0 podman[157092]: 2025-11-25 20:16:41.863027968 +0000 UTC m=+0.192263210 container init a00a186705d9441b4d5ab4ecd03354ed301962da237da97923df72b7711a3510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hamilton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:16:41 compute-0 python3.9[157094]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:16:41 compute-0 podman[157092]: 2025-11-25 20:16:41.874231711 +0000 UTC m=+0.203466963 container start a00a186705d9441b4d5ab4ecd03354ed301962da237da97923df72b7711a3510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:16:41 compute-0 podman[157092]: 2025-11-25 20:16:41.879897714 +0000 UTC m=+0.209132986 container attach a00a186705d9441b4d5ab4ecd03354ed301962da237da97923df72b7711a3510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hamilton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:16:41 compute-0 sudo[157085]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:42 compute-0 ceph-mon[75144]: pgmap v352: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:42 compute-0 sudo[157268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxbftglltfzvugxofvmdolgvxbsvhvxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101802.2136064-446-55900733671399/AnsiballZ_file.py'
Nov 25 20:16:42 compute-0 sudo[157268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]: {
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:     "0": [
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:         {
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "devices": [
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "/dev/loop3"
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             ],
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_name": "ceph_lv0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_size": "21470642176",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "name": "ceph_lv0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "tags": {
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.cluster_name": "ceph",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.crush_device_class": "",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.encrypted": "0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.osd_id": "0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.type": "block",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.vdo": "0"
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             },
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "type": "block",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "vg_name": "ceph_vg0"
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:         }
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:     ],
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:     "1": [
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:         {
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "devices": [
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "/dev/loop4"
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             ],
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_name": "ceph_lv1",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_size": "21470642176",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "name": "ceph_lv1",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "tags": {
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.cluster_name": "ceph",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.crush_device_class": "",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.encrypted": "0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.osd_id": "1",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.type": "block",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.vdo": "0"
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             },
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "type": "block",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "vg_name": "ceph_vg1"
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:         }
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:     ],
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:     "2": [
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:         {
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "devices": [
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "/dev/loop5"
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             ],
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_name": "ceph_lv2",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_size": "21470642176",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "name": "ceph_lv2",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "tags": {
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.cluster_name": "ceph",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.crush_device_class": "",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.encrypted": "0",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.osd_id": "2",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.type": "block",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:                 "ceph.vdo": "0"
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             },
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "type": "block",
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:             "vg_name": "ceph_vg2"
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:         }
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]:     ]
Nov 25 20:16:42 compute-0 adoring_hamilton[157110]: }
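The JSON block emitted by the `adoring_hamilton` container is the output of the `ceph-volume ... lvm list --format json` command sudo-logged earlier: it is keyed by OSD id ("0", "1", "2"), and each entry lists the backing logical volume together with the `ceph.*` LV tags that bind it to the cluster fsid and OSD fsid. A short sketch of how such output could be parsed follows; reading from a file named `lvm_list.json` is an assumption for the example.

    #!/usr/bin/env python3
    # Illustrative sketch: summarize `ceph-volume lvm list --format json` output
    # like the block above as (osd_id, lv_path, osd_fsid, cluster_fsid) lines.
    import json

    with open("lvm_list.json") as fh:      # hypothetical capture of the output
        lvm_list = json.load(fh)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"cluster={tags['ceph.cluster_fsid']})")

Against the logged data this would report three OSDs on /dev/ceph_vg0/ceph_lv0 through /dev/ceph_vg2/ceph_lv2, all under cluster fsid 712dd110-763a-5547-8ef7-acda1414fdce.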
Nov 25 20:16:42 compute-0 systemd[1]: libpod-a00a186705d9441b4d5ab4ecd03354ed301962da237da97923df72b7711a3510.scope: Deactivated successfully.
Nov 25 20:16:42 compute-0 podman[157092]: 2025-11-25 20:16:42.703741516 +0000 UTC m=+1.032976748 container died a00a186705d9441b4d5ab4ecd03354ed301962da237da97923df72b7711a3510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:16:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-43e9323a316fc7dadd0408ebe74dae350ac83d6bba0e4232a8b109146bd7784b-merged.mount: Deactivated successfully.
Nov 25 20:16:42 compute-0 podman[157092]: 2025-11-25 20:16:42.776292016 +0000 UTC m=+1.105527268 container remove a00a186705d9441b4d5ab4ecd03354ed301962da237da97923df72b7711a3510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:16:42 compute-0 systemd[1]: libpod-conmon-a00a186705d9441b4d5ab4ecd03354ed301962da237da97923df72b7711a3510.scope: Deactivated successfully.
Nov 25 20:16:42 compute-0 sudo[156824]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:42 compute-0 python3.9[157270]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:16:42 compute-0 sudo[157268]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:42 compute-0 sudo[157287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:42 compute-0 sudo[157287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:42 compute-0 sudo[157287]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:42 compute-0 sudo[157316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:16:42 compute-0 sudo[157316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:42 compute-0 sudo[157316]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v353: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:43 compute-0 sudo[157360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:43 compute-0 sudo[157360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:43 compute-0 sudo[157360]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:43 compute-0 sudo[157457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wehrrjymnlzngribiccsgwqarhelsgrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101802.2136064-446-55900733671399/AnsiballZ_stat.py'
Nov 25 20:16:43 compute-0 sudo[157457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:43 compute-0 sudo[157413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:16:43 compute-0 sudo[157413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:43 compute-0 ceph-mon[75144]: pgmap v353: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:43 compute-0 python3.9[157460]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:16:43 compute-0 sudo[157457]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:43 compute-0 podman[157513]: 2025-11-25 20:16:43.49472686 +0000 UTC m=+0.062045906 container create 4b0a3f521545e7e9927eaa7ddf024ac39bc563afccf6c6c561e46a10cc720964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:16:43 compute-0 systemd[1]: Started libpod-conmon-4b0a3f521545e7e9927eaa7ddf024ac39bc563afccf6c6c561e46a10cc720964.scope.
Nov 25 20:16:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:16:43 compute-0 podman[157513]: 2025-11-25 20:16:43.562031697 +0000 UTC m=+0.129350783 container init 4b0a3f521545e7e9927eaa7ddf024ac39bc563afccf6c6c561e46a10cc720964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:16:43 compute-0 podman[157513]: 2025-11-25 20:16:43.475341421 +0000 UTC m=+0.042660497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:16:43 compute-0 podman[157513]: 2025-11-25 20:16:43.569604548 +0000 UTC m=+0.136923614 container start 4b0a3f521545e7e9927eaa7ddf024ac39bc563afccf6c6c561e46a10cc720964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:16:43 compute-0 elated_gould[157571]: 167 167
Nov 25 20:16:43 compute-0 systemd[1]: libpod-4b0a3f521545e7e9927eaa7ddf024ac39bc563afccf6c6c561e46a10cc720964.scope: Deactivated successfully.
Nov 25 20:16:43 compute-0 podman[157513]: 2025-11-25 20:16:43.574627065 +0000 UTC m=+0.141946121 container attach 4b0a3f521545e7e9927eaa7ddf024ac39bc563afccf6c6c561e46a10cc720964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 20:16:43 compute-0 podman[157513]: 2025-11-25 20:16:43.575122217 +0000 UTC m=+0.142441293 container died 4b0a3f521545e7e9927eaa7ddf024ac39bc563afccf6c6c561e46a10cc720964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:16:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7e100f90725b35e06276f33ece5a7fd70461e647770caad060a5c5cff5e315e-merged.mount: Deactivated successfully.
Nov 25 20:16:43 compute-0 podman[157513]: 2025-11-25 20:16:43.617908207 +0000 UTC m=+0.185227253 container remove 4b0a3f521545e7e9927eaa7ddf024ac39bc563afccf6c6c561e46a10cc720964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:16:43 compute-0 systemd[1]: libpod-conmon-4b0a3f521545e7e9927eaa7ddf024ac39bc563afccf6c6c561e46a10cc720964.scope: Deactivated successfully.
Nov 25 20:16:43 compute-0 podman[157618]: 2025-11-25 20:16:43.844168214 +0000 UTC m=+0.065515014 container create 10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 20:16:43 compute-0 systemd[1]: Started libpod-conmon-10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330.scope.
Nov 25 20:16:43 compute-0 podman[157618]: 2025-11-25 20:16:43.814939427 +0000 UTC m=+0.036286277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:16:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5082e7ab80e4e463f23f0c5694983b265124df8c54d7fd0b2a7bc0c4eea5ba4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5082e7ab80e4e463f23f0c5694983b265124df8c54d7fd0b2a7bc0c4eea5ba4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5082e7ab80e4e463f23f0c5694983b265124df8c54d7fd0b2a7bc0c4eea5ba4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5082e7ab80e4e463f23f0c5694983b265124df8c54d7fd0b2a7bc0c4eea5ba4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:43 compute-0 podman[157618]: 2025-11-25 20:16:43.939092568 +0000 UTC m=+0.160439438 container init 10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:16:43 compute-0 podman[157618]: 2025-11-25 20:16:43.950117957 +0000 UTC m=+0.171464747 container start 10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 20:16:43 compute-0 podman[157618]: 2025-11-25 20:16:43.954723653 +0000 UTC m=+0.176070483 container attach 10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:16:44 compute-0 sudo[157712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjmkcnmmzfsxpdjusqryfnecopqlvrea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101803.4556115-446-181930732566570/AnsiballZ_copy.py'
Nov 25 20:16:44 compute-0 sudo[157712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:44 compute-0 python3.9[157714]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764101803.4556115-446-181930732566570/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:16:44 compute-0 sudo[157712]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:44 compute-0 sudo[157789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdonxqpuywcfjzaihtqaobsreibfltwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101803.4556115-446-181930732566570/AnsiballZ_systemd.py'
Nov 25 20:16:44 compute-0 sudo[157789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:44 compute-0 python3.9[157792]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:16:44 compute-0 systemd[1]: Reloading.
Nov 25 20:16:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v354: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:45 compute-0 hungry_leakey[157657]: {
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "osd_id": 2,
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "type": "bluestore"
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:     },
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "osd_id": 1,
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "type": "bluestore"
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:     },
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "osd_id": 0,
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:         "type": "bluestore"
Nov 25 20:16:45 compute-0 hungry_leakey[157657]:     }
Nov 25 20:16:45 compute-0 hungry_leakey[157657]: }
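The `hungry_leakey` container prints the companion `ceph-volume ... raw list --format json` view of the same disks: here the top-level keys are OSD uuids rather than OSD ids, and each record names the device-mapper path and the bluestore objectstore type. A small sketch inverting that structure into an osd_id-to-device map follows; the input filename is again an assumption.

    #!/usr/bin/env python3
    # Illustrative sketch: turn `ceph-volume raw list --format json` output like
    # the block above into an osd_id -> device mapping.
    import json

    with open("raw_list.json") as fh:      # hypothetical capture of the output
        raw_list = json.load(fh)

    by_osd_id = {
        entry["osd_id"]: entry["device"]
        for entry in raw_list.values()
        if entry.get("type") == "bluestore"
    }
    for osd_id in sorted(by_osd_id):
        print(f"osd.{osd_id} -> {by_osd_id[osd_id]}")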
Nov 25 20:16:45 compute-0 podman[157618]: 2025-11-25 20:16:45.037737563 +0000 UTC m=+1.259084363 container died 10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:16:45 compute-0 systemd-rc-local-generator[157841]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:16:45 compute-0 systemd-sysv-generator[157846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:16:45 compute-0 systemd[1]: libpod-10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330.scope: Deactivated successfully.
Nov 25 20:16:45 compute-0 systemd[1]: libpod-10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330.scope: Consumed 1.092s CPU time.
Nov 25 20:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5082e7ab80e4e463f23f0c5694983b265124df8c54d7fd0b2a7bc0c4eea5ba4d-merged.mount: Deactivated successfully.
Nov 25 20:16:45 compute-0 sudo[157789]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:45 compute-0 podman[157618]: 2025-11-25 20:16:45.33057556 +0000 UTC m=+1.551922320 container remove 10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:16:45 compute-0 systemd[1]: libpod-conmon-10a3194973edf36e529375ad3a56bbafdd2084fa5236f5d5816e23ecaf387330.scope: Deactivated successfully.
Nov 25 20:16:45 compute-0 sudo[157413]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:16:45 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:16:45 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:45 compute-0 sudo[157877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:16:45 compute-0 sudo[157877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:45 compute-0 sudo[157877]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:45 compute-0 sudo[157918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:16:45 compute-0 sudo[157918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:16:45 compute-0 sudo[157918]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:45 compute-0 sudo[157990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vppptrdpduamiscwqicdqgqzfhgbfkuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101803.4556115-446-181930732566570/AnsiballZ_systemd.py'
Nov 25 20:16:45 compute-0 sudo[157990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:45 compute-0 python3.9[157992]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
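Taken together, the three zuul-driven tasks above (ansible-copy of the unit file, ansible-systemd with daemon_reload=True, then ansible-systemd with state=restarted enabled=True) amount to the usual systemctl sequence for installing and activating a unit. A sketch of that equivalent sequence follows, assuming root privileges; the unit name matches the log, but the script itself is illustrative rather than what the Ansible modules execute internally.

    #!/usr/bin/env python3
    # Illustrative sketch: the systemctl sequence implied by the ansible-systemd
    # invocations logged above.
    import subprocess

    UNIT = "edpm_ovn_metadata_agent.service"

    def systemctl(*args: str) -> None:
        subprocess.run(["systemctl", *args], check=True)

    systemctl("daemon-reload")     # pick up the freshly copied unit file
    systemctl("enable", UNIT)      # enabled=True
    systemctl("restart", UNIT)     # state=restarted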
Nov 25 20:16:46 compute-0 systemd[1]: Reloading.
Nov 25 20:16:46 compute-0 systemd-sysv-generator[158025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:16:46 compute-0 systemd-rc-local-generator[158022]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:16:46 compute-0 ceph-mon[75144]: pgmap v354: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:46 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:46 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:16:46 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 25 20:16:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb12c51b2233512bb1cb43ae3f39f174720559d7074378d707fbb5199703de49/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb12c51b2233512bb1cb43ae3f39f174720559d7074378d707fbb5199703de49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 20:16:46 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b.
Nov 25 20:16:46 compute-0 podman[158033]: 2025-11-25 20:16:46.809141858 +0000 UTC m=+0.414755474 container init df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: + sudo -E kolla_set_configs
Nov 25 20:16:46 compute-0 podman[158033]: 2025-11-25 20:16:46.845144606 +0000 UTC m=+0.450758162 container start df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 20:16:46 compute-0 edpm-start-podman-container[158033]: ovn_metadata_agent
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Validating config file
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Copying service configuration files
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Writing out command to execute
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: ++ cat /run_command
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: + CMD=neutron-ovn-metadata-agent
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: + ARGS=
Nov 25 20:16:46 compute-0 ovn_metadata_agent[158048]: + sudo kolla_copy_cacerts
Nov 25 20:16:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v355: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:46 compute-0 podman[158054]: 2025-11-25 20:16:46.990682548 +0000 UTC m=+0.123135688 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 20:16:46 compute-0 edpm-start-podman-container[158032]: Creating additional drop-in dependency for "ovn_metadata_agent" (df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b)
Nov 25 20:16:47 compute-0 ovn_metadata_agent[158048]: + [[ ! -n '' ]]
Nov 25 20:16:47 compute-0 ovn_metadata_agent[158048]: + . kolla_extend_start
Nov 25 20:16:47 compute-0 ovn_metadata_agent[158048]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 25 20:16:47 compute-0 ovn_metadata_agent[158048]: Running command: 'neutron-ovn-metadata-agent'
Nov 25 20:16:47 compute-0 ovn_metadata_agent[158048]: + umask 0022
Nov 25 20:16:47 compute-0 ovn_metadata_agent[158048]: + exec neutron-ovn-metadata-agent
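
The +/++ lines above are bash xtrace from kolla's start script running inside the container. Reconstructed from the trace alone, a sketch of the traced steps only, not the full kolla_start script:

    CMD=$(cat /run_command)    # the image bakes the service command into /run_command
    ARGS=                      # no extra arguments in this deployment
    sudo kolla_copy_cacerts    # install the mounted CA bundle into the container trust store
    [[ ! -n "" ]] && . kolla_extend_start   # guard variable (name not visible in the trace) is empty, so the image-specific hook is sourced
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD} ${ARGS}        # the shell is replaced by neutron-ovn-metadata-agent
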
Nov 25 20:16:47 compute-0 systemd[1]: Reloading.
Nov 25 20:16:47 compute-0 systemd-sysv-generator[158126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:16:47 compute-0 systemd-rc-local-generator[158123]: /etc/rc.d/rc.local is not marked executable, skipping.
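
Both generator messages during the reload are routine noise: systemd-sysv-generator re-wraps the legacy /etc/rc.d/init.d/network initscript on every reload, and the rc.local generator skips a non-executable rc.local. Where the warnings matter, the usual remedies are a chmod and a native unit; a sketch only, assuming the initscript's existing start/stop actions are still the desired behavior:

    # only if rc.local is actually meant to run at boot
    chmod +x /etc/rc.d/rc.local

    # hypothetical minimal native wrapper replacing the generated SysV compatibility unit
    cat > /etc/systemd/system/network.service <<'EOF'
    [Unit]
    Description=Legacy network initscript (native wrapper, sketch)

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/init.d/network start
    ExecStop=/etc/rc.d/init.d/network stop

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
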
Nov 25 20:16:47 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 25 20:16:47 compute-0 sudo[157990]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:47 compute-0 ceph-mon[75144]: pgmap v355: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
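
The interleaved ceph-mgr/ceph-mon pgmap lines are the cluster's periodic heartbeat: all 193 placement groups active+clean, 80 MiB used of 60 GiB. The same information on demand, via standard ceph CLI subcommands:

    ceph pg stat    # one-line PG summary matching the pgmap lines
    ceph -s         # fuller snapshot: health, quorum, pools, usage
    ceph df         # per-pool capacity breakdown
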
Nov 25 20:16:47 compute-0 sshd-session[148673]: Connection closed by 192.168.122.30 port 60872
Nov 25 20:16:47 compute-0 sshd-session[148670]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:16:47 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 25 20:16:47 compute-0 systemd[1]: session-48.scope: Consumed 1min 2.166s CPU time.
Nov 25 20:16:47 compute-0 systemd-logind[789]: Session 48 logged out. Waiting for processes to exit.
Nov 25 20:16:47 compute-0 systemd-logind[789]: Removed session 48.
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.893 158053 INFO neutron.common.config [-] Logging enabled!
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.893 158053 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.894 158053 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
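
With debug = True, oslo.config logs every effective option at startup (log_opt_values), secrets masked as ****, between the ***... and ===... banners below and then section by section. A quick way to flatten that dump back into plain "option = value" pairs, assuming the journal has been exported to a file (compute-0.log is a hypothetical name):

    grep ' log_opt_values ' compute-0.log \
      | sed -e 's/^.*\[-\] //' -e 's/ log_opt_values .*$//'

The two banner lines come through as rows of * and =; everything else is an option and its resolved value.
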
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.894 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.894 158053 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.894 158053 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.894 158053 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.894 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.895 158053 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.895 158053 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.895 158053 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.895 158053 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.895 158053 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.895 158053 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.895 158053 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.895 158053 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.895 158053 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.896 158053 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.896 158053 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.896 158053 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.896 158053 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.896 158053 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.896 158053 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.896 158053 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.896 158053 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.896 158053 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.897 158053 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.897 158053 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.897 158053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.897 158053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.897 158053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.897 158053 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.897 158053 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.897 158053 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.897 158053 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.898 158053 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.898 158053 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.898 158053 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.898 158053 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.898 158053 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.898 158053 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.898 158053 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.898 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.899 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.899 158053 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.899 158053 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.899 158053 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.899 158053 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.899 158053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.899 158053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.899 158053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.899 158053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.900 158053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.900 158053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.900 158053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.900 158053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.900 158053 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.900 158053 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.900 158053 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.901 158053 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.901 158053 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.901 158053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.901 158053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.901 158053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.901 158053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.901 158053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.901 158053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.902 158053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.902 158053 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.902 158053 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.902 158053 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.902 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.902 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.902 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.902 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.903 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.903 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.903 158053 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.903 158053 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.903 158053 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.903 158053 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.903 158053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.903 158053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.904 158053 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.905 158053 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.906 158053 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.906 158053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.906 158053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.906 158053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.906 158053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.906 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.906 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.906 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.906 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.907 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.907 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.907 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.907 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.907 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.907 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.907 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.907 158053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.907 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.908 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.908 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.908 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.908 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.908 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.908 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.908 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.908 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.908 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.909 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.909 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.909 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.909 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.909 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.909 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.909 158053 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.909 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.909 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.910 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.910 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.910 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.910 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.910 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.910 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.910 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.910 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.910 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.911 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.912 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.912 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.912 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.912 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.912 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.912 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.912 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.912 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.913 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.913 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.913 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.913 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.913 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.913 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.913 158053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.913 158053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.913 158053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.914 158053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.915 158053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.915 158053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.915 158053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.915 158053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.915 158053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.915 158053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.915 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.915 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.915 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.916 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.916 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.916 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.916 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.916 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.916 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.916 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.916 158053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.916 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.917 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.917 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.917 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.917 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.917 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.917 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.917 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.917 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.917 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.918 158053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.918 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.918 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.918 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.918 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.918 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.918 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.919 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.919 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.919 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.919 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.919 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.919 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.919 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.919 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.920 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.920 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.920 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.920 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.920 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.920 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.920 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.921 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.921 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.921 158053 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.921 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.921 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.921 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.921 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.922 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.922 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.922 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.922 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.922 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.922 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.922 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.923 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.923 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.923 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.923 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.923 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.923 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.923 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.924 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.924 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.924 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.924 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.924 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.924 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.925 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.925 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.925 158053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.925 158053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.925 158053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.925 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.925 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.926 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.926 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.926 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.926 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.926 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.926 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.926 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.927 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.927 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.927 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.927 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.927 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.927 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.927 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.928 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.928 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.928 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.928 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.928 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.928 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.928 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.929 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.929 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.929 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.929 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.929 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.929 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.929 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.930 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.930 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.930 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.930 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.930 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.930 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.930 158053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.931 158053 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.941 158053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.942 158053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.942 158053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.942 158053 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.942 158053 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.957 158053 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 53dbc5fa-5cc8-4cbc-8a85-0750fc7ff954 (UUID: 53dbc5fa-5cc8-4cbc-8a85-0750fc7ff954) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 25 20:16:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v356: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.993 158053 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.994 158053 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.994 158053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.994 158053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 20:16:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:48.998 158053 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.005 158053 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.014 158053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '53dbc5fa-5cc8-4cbc-8a85-0750fc7ff954'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f984c44ccd0>], external_ids={}, name=53dbc5fa-5cc8-4cbc-8a85-0750fc7ff954, nb_cfg_timestamp=1764101735928, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.015 158053 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f984c450b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.016 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.016 158053 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.016 158053 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.016 158053 INFO oslo_service.service [-] Starting 1 workers
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.021 158053 DEBUG oslo_service.service [-] Started child 158159 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.024 158053 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpwdfp5vdw/privsep.sock']
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.026 158159 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-169370'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.057 158159 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.058 158159 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.058 158159 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.066 158159 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.072 158159 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.080 158159 INFO eventlet.wsgi.server [-] (158159) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 25 20:16:49 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.732 158053 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.733 158053 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwdfp5vdw/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.604 158164 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.611 158164 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.615 158164 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.615 158164 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158164
Nov 25 20:16:49 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:49.735 158164 DEBUG oslo.privsep.daemon [-] privsep: reply[ed19582e-1445-44ec-a41f-df108966662f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 20:16:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:50 compute-0 ceph-mon[75144]: pgmap v356: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.252 158164 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.253 158164 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.253 158164 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.762 158164 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4aeabf-4f68-4906-93fa-ebacd3dcfa76]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.764 158053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=53dbc5fa-5cc8-4cbc-8a85-0750fc7ff954, column=external_ids, values=({'neutron:ovn-metadata-id': 'dfec52ea-384d-5593-a34f-4de58d359287'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.777 158053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=53dbc5fa-5cc8-4cbc-8a85-0750fc7ff954, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.784 158053 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.784 158053 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.784 158053 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.784 158053 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.784 158053 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.784 158053 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.785 158053 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.785 158053 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.785 158053 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.785 158053 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.785 158053 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.785 158053 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.785 158053 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.785 158053 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.785 158053 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.786 158053 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.786 158053 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.786 158053 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.786 158053 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.786 158053 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.786 158053 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.786 158053 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.786 158053 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.786 158053 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.787 158053 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.787 158053 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.787 158053 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.787 158053 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.787 158053 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.787 158053 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.787 158053 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.787 158053 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.787 158053 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.788 158053 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.788 158053 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.788 158053 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.788 158053 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.788 158053 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.788 158053 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.788 158053 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.788 158053 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.789 158053 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.790 158053 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.791 158053 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.791 158053 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.791 158053 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.791 158053 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.791 158053 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.791 158053 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.791 158053 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.791 158053 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.791 158053 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.792 158053 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.792 158053 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.792 158053 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.792 158053 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.792 158053 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.792 158053 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.792 158053 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.792 158053 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.792 158053 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.793 158053 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.794 158053 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.794 158053 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.794 158053 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.794 158053 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.794 158053 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.794 158053 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.794 158053 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.794 158053 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.794 158053 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.795 158053 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.796 158053 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.796 158053 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.796 158053 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.796 158053 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.796 158053 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.796 158053 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.796 158053 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.796 158053 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.796 158053 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.797 158053 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.797 158053 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.797 158053 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.797 158053 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.797 158053 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.797 158053 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.797 158053 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.797 158053 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.798 158053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.799 158053 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.799 158053 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.799 158053 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.799 158053 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.799 158053 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.799 158053 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.799 158053 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.799 158053 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.799 158053 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.800 158053 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.800 158053 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.800 158053 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.800 158053 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.800 158053 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.800 158053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.800 158053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.800 158053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.800 158053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.801 158053 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.802 158053 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.803 158053 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.803 158053 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.803 158053 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.803 158053 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.803 158053 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.803 158053 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.803 158053 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.803 158053 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.803 158053 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.804 158053 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.804 158053 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.804 158053 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.804 158053 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.804 158053 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.804 158053 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.804 158053 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.804 158053 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.804 158053 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.805 158053 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.805 158053 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.805 158053 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.805 158053 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.805 158053 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.805 158053 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.805 158053 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.805 158053 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.805 158053 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.806 158053 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.807 158053 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.808 158053 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.809 158053 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.809 158053 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.809 158053 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.809 158053 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.809 158053 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.809 158053 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.809 158053 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.809 158053 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.809 158053 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.810 158053 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.811 158053 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.811 158053 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.811 158053 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.811 158053 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.811 158053 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.811 158053 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.811 158053 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.811 158053 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.811 158053 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.812 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.813 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.813 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.813 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.813 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.813 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.813 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.813 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.813 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.813 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.814 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.815 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.815 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.815 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.815 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.815 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.815 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.815 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.815 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.816 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.816 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.816 158053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.816 158053 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.816 158053 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.816 158053 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.816 158053 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:16:50 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:16:50.816 158053 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 20:16:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v357: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:52 compute-0 ceph-mon[75144]: pgmap v357: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v358: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:53 compute-0 sshd-session[158169]: Accepted publickey for zuul from 192.168.122.30 port 42606 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:16:53 compute-0 systemd-logind[789]: New session 49 of user zuul.
Nov 25 20:16:53 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 25 20:16:53 compute-0 sshd-session[158169]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:16:54 compute-0 ceph-mon[75144]: pgmap v358: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:54 compute-0 python3.9[158322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:16:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v359: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:16:55 compute-0 sudo[158476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oorzfjbcilifibiqpnbhsjpcyaluwips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101815.1410894-34-266352841610186/AnsiballZ_command.py'
Nov 25 20:16:55 compute-0 sudo[158476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:55 compute-0 python3.9[158478]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:16:55 compute-0 sudo[158476]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:16:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 15.90 MB, 0.03 MB/s
                                           Interval WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:16:56 compute-0 ceph-mon[75144]: pgmap v359: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:16:56
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'backups', 'vms']
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:16:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v360: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:57 compute-0 sudo[158641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cayjluliznxpvmnueotkoscuqckhcfpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101816.4198093-45-131446258075996/AnsiballZ_systemd_service.py'
Nov 25 20:16:57 compute-0 sudo[158641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:16:57 compute-0 python3.9[158643]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:16:57 compute-0 systemd[1]: Reloading.
Nov 25 20:16:57 compute-0 systemd-rc-local-generator[158669]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:16:57 compute-0 systemd-sysv-generator[158675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:16:57 compute-0 sudo[158641]: pam_unix(sudo:session): session closed for user root
Nov 25 20:16:57 compute-0 podman[158680]: 2025-11-25 20:16:57.946467092 +0000 UTC m=+0.123633779 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:16:58 compute-0 ceph-mon[75144]: pgmap v360: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:16:58 compute-0 python3.9[158855]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:16:58 compute-0 network[158872]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:16:58 compute-0 network[158873]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:16:58 compute-0 network[158874]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:16:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v361: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:00 compute-0 ceph-mon[75144]: pgmap v361: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v362: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:17:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 15.97 MB, 0.03 MB/s
                                           Interval WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:17:02 compute-0 ceph-mon[75144]: pgmap v362: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:17:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v363: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:03 compute-0 sudo[159134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulegwebzllhlnmgdxbkqglrhveyusicw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101823.05377-64-243172566739673/AnsiballZ_systemd_service.py'
Nov 25 20:17:03 compute-0 sudo[159134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:03 compute-0 python3.9[159136]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:17:03 compute-0 sudo[159134]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:04 compute-0 ceph-mon[75144]: pgmap v363: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:04 compute-0 ceph-mgr[75443]: [devicehealth INFO root] Check health
Nov 25 20:17:04 compute-0 sudo[159287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdlojohpzqseufqpwrsejhnibyonhuac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101824.0603626-64-221266605883149/AnsiballZ_systemd_service.py'
Nov 25 20:17:04 compute-0 sudo[159287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:04 compute-0 python3.9[159289]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:17:04 compute-0 sudo[159287]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v364: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:05 compute-0 sudo[159440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atsfejqblsfdodwcctkqwzzrjxmqguxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101824.9875684-64-68256387937621/AnsiballZ_systemd_service.py'
Nov 25 20:17:05 compute-0 sudo[159440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:05 compute-0 python3.9[159442]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:17:05 compute-0 sudo[159440]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:06 compute-0 ceph-mon[75144]: pgmap v364: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:17:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 15.56 MB, 0.03 MB/s
                                           Interval WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:17:06 compute-0 sudo[159593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uklmpvfhydleffapqlplcbwpiajamlvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101825.9398024-64-228037978443758/AnsiballZ_systemd_service.py'
Nov 25 20:17:06 compute-0 sudo[159593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:06 compute-0 python3.9[159595]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:17:06 compute-0 sudo[159593]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v365: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:07 compute-0 sudo[159746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehrnxdjabuyuybquneynjdztgnkqwal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101826.9179592-64-195392324626223/AnsiballZ_systemd_service.py'
Nov 25 20:17:07 compute-0 sudo[159746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:07 compute-0 python3.9[159748]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:17:07 compute-0 sudo[159746]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:08 compute-0 ceph-mon[75144]: pgmap v365: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:08 compute-0 sudo[159899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irtauvhsyneouocastatlrloxgvbmvav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101827.829039-64-80579216360374/AnsiballZ_systemd_service.py'
Nov 25 20:17:08 compute-0 sudo[159899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:08 compute-0 python3.9[159901]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:17:08 compute-0 sudo[159899]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v366: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:09 compute-0 sudo[160052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzdoqjsnulizgxytgcrjagqxqudisjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101828.7513902-64-257262297161869/AnsiballZ_systemd_service.py'
Nov 25 20:17:09 compute-0 sudo[160052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:09 compute-0 python3.9[160054]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:17:09 compute-0 sudo[160052]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:10 compute-0 ceph-mon[75144]: pgmap v366: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:10 compute-0 sudo[160205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlojgvdprjjfkhenyiddcaitvsedwejq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101829.9128935-116-85882704549057/AnsiballZ_file.py'
Nov 25 20:17:10 compute-0 sudo[160205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:10 compute-0 python3.9[160207]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:10 compute-0 sudo[160205]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v367: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:11 compute-0 sudo[160357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoffhmgkmicxeixqhmclayihujypyqpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101830.9328647-116-34659614596862/AnsiballZ_file.py'
Nov 25 20:17:11 compute-0 sudo[160357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:11 compute-0 python3.9[160359]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:11 compute-0 sudo[160357]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:12 compute-0 sudo[160509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbjmjuvwbddcxgfamfnamparnupzcnop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101831.7535975-116-11301287274874/AnsiballZ_file.py'
Nov 25 20:17:12 compute-0 sudo[160509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:12 compute-0 ceph-mon[75144]: pgmap v367: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:12 compute-0 python3.9[160511]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:12 compute-0 sudo[160509]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:12 compute-0 sudo[160661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nojbfqxzqxezmcxegsplfwtzsukuyrrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101832.537537-116-261284810174908/AnsiballZ_file.py'
Nov 25 20:17:12 compute-0 sudo[160661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v368: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:13 compute-0 python3.9[160663]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:13 compute-0 sudo[160661]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:13 compute-0 sudo[160813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjuebxfxigepcdrslocgscpnwprbsxbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101833.2878656-116-268332974058403/AnsiballZ_file.py'
Nov 25 20:17:13 compute-0 sudo[160813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:13 compute-0 python3.9[160815]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:13 compute-0 sudo[160813]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:14 compute-0 ceph-mon[75144]: pgmap v368: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:14 compute-0 sudo[160965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztbvgokefhzjlevwwtifsnyigtewamlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101834.1152203-116-267530851734062/AnsiballZ_file.py'
Nov 25 20:17:14 compute-0 sudo[160965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:14 compute-0 python3.9[160967]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:14 compute-0 sudo[160965]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v369: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:15 compute-0 sudo[161117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujrjcfqfevityrergntmqlhacdcpmdrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101834.9442914-116-133982951366195/AnsiballZ_file.py'
Nov 25 20:17:15 compute-0 sudo[161117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:15 compute-0 python3.9[161119]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:15 compute-0 sudo[161117]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:16 compute-0 ceph-mon[75144]: pgmap v369: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:16 compute-0 sudo[161269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcvtlubiesnoytvgkohdgnyxmligmekv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101835.9609866-166-232563533005273/AnsiballZ_file.py'
Nov 25 20:17:16 compute-0 sudo[161269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:16 compute-0 python3.9[161271]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:16 compute-0 sudo[161269]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v370: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:17 compute-0 sudo[161432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdwihzwhxeddsfbeoaglxuzkpsbwclv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101836.8718028-166-33929366723924/AnsiballZ_file.py'
Nov 25 20:17:17 compute-0 sudo[161432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:17 compute-0 podman[161395]: 2025-11-25 20:17:17.351787284 +0000 UTC m=+0.094789092 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:17:17 compute-0 python3.9[161440]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:17 compute-0 sudo[161432]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:18 compute-0 sudo[161593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbwhsxwpctbgqvxqyymucbyxxqcrqanh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101837.7282653-166-272445530004148/AnsiballZ_file.py'
Nov 25 20:17:18 compute-0 sudo[161593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:18 compute-0 ceph-mon[75144]: pgmap v370: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:18 compute-0 python3.9[161595]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:18 compute-0 sudo[161593]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:18 compute-0 sudo[161745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgiynfaomoushrcqvprjiraxtrkzjler ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101838.544374-166-127527213309151/AnsiballZ_file.py'
Nov 25 20:17:18 compute-0 sudo[161745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v371: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:19 compute-0 ceph-mon[75144]: pgmap v371: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:19 compute-0 python3.9[161747]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:19 compute-0 sudo[161745]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:19 compute-0 sudo[161897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajbfnopkagvnkfdmdipiikkfbttfsokc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101839.4408026-166-154413917677794/AnsiballZ_file.py'
Nov 25 20:17:19 compute-0 sudo[161897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:20 compute-0 python3.9[161899]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:20 compute-0 sudo[161897]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:20 compute-0 sudo[162049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvpekslzivsbdonxaguydhezztkwfrcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101840.2523108-166-86495033107417/AnsiballZ_file.py'
Nov 25 20:17:20 compute-0 sudo[162049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:20 compute-0 python3.9[162051]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:20 compute-0 sudo[162049]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v372: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:21 compute-0 sudo[162201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnyhrsqssnkuornlbhijijetusgvgmdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101840.9989161-166-8577522541043/AnsiballZ_file.py'
Nov 25 20:17:21 compute-0 sudo[162201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:21 compute-0 python3.9[162203]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:17:21 compute-0 sudo[162201]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:22 compute-0 ceph-mon[75144]: pgmap v372: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:22 compute-0 sudo[162353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsnavjlwjggawjtdbksulornqvqylmzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101841.9562514-217-119116789073242/AnsiballZ_command.py'
Nov 25 20:17:22 compute-0 sudo[162353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:22 compute-0 python3.9[162355]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:17:22 compute-0 sudo[162353]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v373: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:23 compute-0 python3.9[162507]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:17:24 compute-0 ceph-mon[75144]: pgmap v373: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:24 compute-0 sudo[162657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysxhpftssarfxqgsklldpcldjjzzszca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101843.9077156-235-249057143001059/AnsiballZ_systemd_service.py'
Nov 25 20:17:24 compute-0 sudo[162657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:24 compute-0 python3.9[162659]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:17:24 compute-0 systemd[1]: Reloading.
Nov 25 20:17:24 compute-0 systemd-rc-local-generator[162686]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:17:24 compute-0 systemd-sysv-generator[162689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:17:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v374: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:25 compute-0 sudo[162657]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:25 compute-0 sudo[162843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewttzqmusrweommpzikyfwqenfjuxoup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101845.4055011-243-52526034684847/AnsiballZ_command.py'
Nov 25 20:17:25 compute-0 sudo[162843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:26 compute-0 python3.9[162845]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:17:26 compute-0 sudo[162843]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:26 compute-0 ceph-mon[75144]: pgmap v374: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:26 compute-0 sudo[162996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gafzkpjbqhlsdggpzoyfvjdpuxwnyhaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101846.242166-243-77119219543198/AnsiballZ_command.py'
Nov 25 20:17:26 compute-0 sudo[162996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:17:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:17:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:17:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:17:26 compute-0 python3.9[162998]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:17:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:17:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:17:26 compute-0 sudo[162996]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v375: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:27 compute-0 sudo[163149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnmbjeyqeomkuostrcvorkjtconecbqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101847.0456107-243-54666540560900/AnsiballZ_command.py'
Nov 25 20:17:27 compute-0 sudo[163149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:27 compute-0 python3.9[163151]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:17:27 compute-0 sudo[163149]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:28 compute-0 ceph-mon[75144]: pgmap v375: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:28 compute-0 sudo[163313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiwztomizahfpwmjvlessvxrllhiniqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101847.8363662-243-56268103242290/AnsiballZ_command.py'
Nov 25 20:17:28 compute-0 sudo[163313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:28 compute-0 podman[163276]: 2025-11-25 20:17:28.368761151 +0000 UTC m=+0.203872526 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 20:17:28 compute-0 python3.9[163321]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:17:28 compute-0 sudo[163313]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v376: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:29 compute-0 sudo[163482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhebkmkfmwuuwwupvlujsdevwgnrvhwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101848.696883-243-218040026969484/AnsiballZ_command.py'
Nov 25 20:17:29 compute-0 sudo[163482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:29 compute-0 python3.9[163484]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:17:29 compute-0 sudo[163482]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:30 compute-0 ceph-mon[75144]: pgmap v376: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:30 compute-0 sudo[163635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fegbnznuxdbtmguierkcqojupidhlube ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101849.7376187-243-50377074302337/AnsiballZ_command.py'
Nov 25 20:17:30 compute-0 sudo[163635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:30 compute-0 python3.9[163637]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:17:30 compute-0 sudo[163635]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:30 compute-0 sudo[163788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhxkecfasdwttgnzglsxetgfiolltonl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101850.6464508-243-221368249277237/AnsiballZ_command.py'
Nov 25 20:17:30 compute-0 sudo[163788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v377: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:31 compute-0 python3.9[163790]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:17:31 compute-0 sudo[163788]: pam_unix(sudo:session): session closed for user root
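
The four ansible-ansible.legacy.command tasks above clear the remembered failed state (and start-rate counters) of the retired tripleo_nova_virt* units so stale failures cannot interfere with the rest of the run. A minimal Python sketch of the same loop; the unit names are copied from the log, the loop itself is illustrative:

    import subprocess

    TRIPLEO_VIRT_UNITS = [
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]

    for unit in TRIPLEO_VIRT_UNITS:
        # Same call the command module logs: systemctl reset-failed <unit>.
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=True)
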
Nov 25 20:17:32 compute-0 ceph-mon[75144]: pgmap v377: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:32 compute-0 sudo[163941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zilynfydsrmlntaaqucttliscpuvjlew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101851.7096448-297-166676072392481/AnsiballZ_getent.py'
Nov 25 20:17:32 compute-0 sudo[163941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:32 compute-0 python3.9[163943]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 20:17:32 compute-0 sudo[163941]: pam_unix(sudo:session): session closed for user root
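
ansible.builtin.getent with database=passwd and fail_key=True reports failure when the key is absent; the play evidently tolerates a miss here, since the tasks that follow create the libvirt group and user. A rough local equivalent, assuming the standard pwd module:

    import pwd

    def getent_passwd(key: str) -> dict:
        # fail_key=True analogue: a missing entry is an error, not an
        # empty result.
        try:
            e = pwd.getpwnam(key)
        except KeyError:
            raise SystemExit(f"no passwd entry for {key!r}")
        return {"name": e.pw_name, "uid": e.pw_uid, "gid": e.pw_gid,
                "home": e.pw_dir, "shell": e.pw_shell}

    print(getent_passwd("libvirt"))
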
Nov 25 20:17:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v378: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:33 compute-0 sudo[164094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqhgyybhbbgccarzylkjqgdwvjkdlyws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101852.7155771-305-265371722161150/AnsiballZ_group.py'
Nov 25 20:17:33 compute-0 sudo[164094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:33 compute-0 python3.9[164096]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 20:17:33 compute-0 groupadd[164097]: group added to /etc/group: name=libvirt, GID=42473
Nov 25 20:17:33 compute-0 groupadd[164097]: group added to /etc/gshadow: name=libvirt
Nov 25 20:17:33 compute-0 groupadd[164097]: new group: name=libvirt, GID=42473
Nov 25 20:17:33 compute-0 sudo[164094]: pam_unix(sudo:session): session closed for user root
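
ansible.builtin.group with state=present is idempotent: it calls groupadd (as the three groupadd lines show) only when the group is missing. A minimal sketch of that check-then-create pattern, with the name and GID taken from the log:

    import grp
    import subprocess

    def ensure_group(name: str, gid: int) -> None:
        try:
            grp.getgrnam(name)          # already present: nothing to do
        except KeyError:
            subprocess.run(["groupadd", "--gid", str(gid), name], check=True)

    ensure_group("libvirt", 42473)
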
Nov 25 20:17:34 compute-0 ceph-mon[75144]: pgmap v378: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:34 compute-0 sudo[164252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvzrweiubmdykiqimsiifweitvlwbywy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101853.8158476-313-115888889668283/AnsiballZ_user.py'
Nov 25 20:17:34 compute-0 sudo[164252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:34 compute-0 python3.9[164254]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 20:17:34 compute-0 useradd[164256]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 20:17:34 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 20:17:34 compute-0 sudo[164252]: pam_unix(sudo:session): session closed for user root
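
The matching ansible.builtin.user task maps onto a single useradd call, visible in the 'new user' line above. A sketch of the equivalent invocation, with the values copied from the log (create_home=True is what yields home=/home/libvirt):

    import subprocess

    subprocess.run(
        ["useradd",
         "--uid", "42473",
         "--gid", "libvirt",
         "--comment", "libvirt user",
         "--shell", "/sbin/nologin",
         "--create-home",
         "libvirt"],
        check=True,
    )
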
Nov 25 20:17:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v379: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:35 compute-0 sudo[164413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wueosksmonihkvbncjgwqcpiqfxngdta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101855.3205721-324-209289280063475/AnsiballZ_setup.py'
Nov 25 20:17:35 compute-0 sudo[164413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:36 compute-0 python3.9[164415]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:17:36 compute-0 ceph-mon[75144]: pgmap v379: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:36 compute-0 sudo[164413]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:36 compute-0 sudo[164497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyzazkfvdlbwwyzfeiknswzwmpbbthtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101855.3205721-324-209289280063475/AnsiballZ_dnf.py'
Nov 25 20:17:36 compute-0 sudo[164497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:17:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v380: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:37 compute-0 python3.9[164499]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
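
The dnf task installs the virtualization and Ceph client stack in one transaction; state=present means already-installed packages are left alone. An illustrative equivalent (the first four names carry stray trailing spaces in the logged task and are trimmed here):

    import subprocess

    PACKAGES = [
        "libvirt", "libvirt-admin", "libvirt-client", "libvirt-daemon",
        "qemu-kvm", "qemu-img", "libguestfs", "libseccomp",
        "swtpm", "swtpm-tools", "edk2-ovmf", "ceph-common",
        "cyrus-sasl-scram",
    ]

    subprocess.run(["dnf", "-y", "install", *PACKAGES], check=True)
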
Nov 25 20:17:38 compute-0 ceph-mon[75144]: pgmap v380: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v381: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:40 compute-0 ceph-mon[75144]: pgmap v381: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v382: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:42 compute-0 ceph-mon[75144]: pgmap v382: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v383: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:44 compute-0 ceph-mon[75144]: pgmap v383: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v384: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:45 compute-0 sudo[164510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:17:45 compute-0 sudo[164510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:45 compute-0 sudo[164510]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:45 compute-0 sudo[164536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:17:45 compute-0 sudo[164536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:45 compute-0 sudo[164536]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:45 compute-0 sudo[164561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:17:45 compute-0 sudo[164561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:45 compute-0 sudo[164561]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:45 compute-0 sudo[164586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:17:45 compute-0 sudo[164586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:46 compute-0 ceph-mon[75144]: pgmap v384: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:46 compute-0 sudo[164586]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 25 20:17:46 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 25 20:17:46 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 25 20:17:46 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
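
Before recomputing the autotuned value, cephadm removes any per-OSD osd_memory_target override, which is what the three dispatched "config rm" commands above do. The same effect from the CLI, sketched:

    import subprocess

    for osd in ("osd.0", "osd.1", "osd.2"):
        # Equivalent of the mon_command {"prefix": "config rm", ...} calls.
        subprocess.run(["ceph", "config", "rm", osd, "osd_memory_target"],
                       check=True)
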
Nov 25 20:17:46 compute-0 ceph-mgr[75443]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:17:46 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 25 20:17:46 compute-0 ceph-mgr[75443]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:17:46 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
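
The warning is pure arithmetic: this CI host is small enough that cephadm's computed per-OSD share comes out to roughly 43691k (44740198 bytes, about 43 MiB), while Ceph enforces a hard floor of 939524096 bytes (896 MiB) for osd_memory_target, so the mon rejects the set. A quick check of the logged numbers:

    OSD_MEMORY_TARGET_MIN = 939_524_096   # 896 MiB, from the error text
    computed = 44_740_198                 # "43691k" in cephadm's message

    print(f"{computed / 2**20:.1f} MiB < {OSD_MEMORY_TARGET_MIN / 2**20:.0f} MiB")
    assert computed < OSD_MEMORY_TARGET_MIN
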
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:17:46 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:17:46 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:17:46 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:17:46 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 0e8274d9-f0fe-4ef1-a843-dd26a6718fa3 does not exist
Nov 25 20:17:46 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 525d4f37-55f5-457f-bebd-54e51e0b46e0 does not exist
Nov 25 20:17:46 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d43e4253-747b-487a-9f56-f7d492c99131 does not exist
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:17:46 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:17:46 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:17:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:17:46 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:17:46 compute-0 sudo[164643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:17:46 compute-0 sudo[164643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:46 compute-0 sudo[164643]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:46 compute-0 sudo[164668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:17:46 compute-0 sudo[164668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:46 compute-0 sudo[164668]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v385: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:47 compute-0 sudo[164693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:17:47 compute-0 sudo[164693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:47 compute-0 sudo[164693]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:47 compute-0 sudo[164718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:17:47 compute-0 sudo[164718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
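
cephadm runs ceph-volume inside a short-lived ceph container (the nervous_jones/objective_dewdney containers that follow); stripped of the wrapper, the logged call prepares three pre-created logical volumes as OSDs without writing systemd units. A sketch of that inner invocation, with the device paths and OSDSPEC affinity environment variable taken from the log:

    import os
    import subprocess

    env = dict(os.environ, CEPH_VOLUME_OSDSPEC_AFFINITY="default_drive_group")
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0",
         "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2",
         "--yes", "--no-systemd"],
        check=True, env=env,
    )
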
Nov 25 20:17:47 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:17:47 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:17:47 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 20:17:47 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:17:47 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:17:47 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:17:47 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:17:47 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:17:47 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:17:47 compute-0 podman[164792]: 2025-11-25 20:17:47.548324776 +0000 UTC m=+0.061710890 container create 10ac24c3879908450454b24a11fe02bc2b7d003f45af26547ecc9060ab27ed26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:17:47 compute-0 systemd[1]: Started libpod-conmon-10ac24c3879908450454b24a11fe02bc2b7d003f45af26547ecc9060ab27ed26.scope.
Nov 25 20:17:47 compute-0 podman[164792]: 2025-11-25 20:17:47.520706987 +0000 UTC m=+0.034093111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:17:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:17:47 compute-0 podman[164792]: 2025-11-25 20:17:47.656172675 +0000 UTC m=+0.169558869 container init 10ac24c3879908450454b24a11fe02bc2b7d003f45af26547ecc9060ab27ed26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:17:47 compute-0 podman[164792]: 2025-11-25 20:17:47.668987145 +0000 UTC m=+0.182373269 container start 10ac24c3879908450454b24a11fe02bc2b7d003f45af26547ecc9060ab27ed26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:17:47 compute-0 podman[164792]: 2025-11-25 20:17:47.675636131 +0000 UTC m=+0.189022295 container attach 10ac24c3879908450454b24a11fe02bc2b7d003f45af26547ecc9060ab27ed26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:17:47 compute-0 nervous_jones[164813]: 167 167
Nov 25 20:17:47 compute-0 podman[164792]: 2025-11-25 20:17:47.678448611 +0000 UTC m=+0.191834705 container died 10ac24c3879908450454b24a11fe02bc2b7d003f45af26547ecc9060ab27ed26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:17:47 compute-0 systemd[1]: libpod-10ac24c3879908450454b24a11fe02bc2b7d003f45af26547ecc9060ab27ed26.scope: Deactivated successfully.
Nov 25 20:17:47 compute-0 podman[164809]: 2025-11-25 20:17:47.690340847 +0000 UTC m=+0.089028280 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 20:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-df0c5c0ef30a0a98f8408813365262e78bbb3fcaafe4ddd22285c72fbfb2e344-merged.mount: Deactivated successfully.
Nov 25 20:17:47 compute-0 podman[164792]: 2025-11-25 20:17:47.736105239 +0000 UTC m=+0.249491363 container remove 10ac24c3879908450454b24a11fe02bc2b7d003f45af26547ecc9060ab27ed26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:17:47 compute-0 systemd[1]: libpod-conmon-10ac24c3879908450454b24a11fe02bc2b7d003f45af26547ecc9060ab27ed26.scope: Deactivated successfully.
Nov 25 20:17:47 compute-0 podman[164862]: 2025-11-25 20:17:47.974870724 +0000 UTC m=+0.059869634 container create 5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 20:17:48 compute-0 systemd[1]: Started libpod-conmon-5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227.scope.
Nov 25 20:17:48 compute-0 podman[164862]: 2025-11-25 20:17:47.951679846 +0000 UTC m=+0.036678796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:17:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f04dc94532259472e9f166afbdf0914f58d289cd19318cb680beccacd2feca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f04dc94532259472e9f166afbdf0914f58d289cd19318cb680beccacd2feca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f04dc94532259472e9f166afbdf0914f58d289cd19318cb680beccacd2feca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f04dc94532259472e9f166afbdf0914f58d289cd19318cb680beccacd2feca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f04dc94532259472e9f166afbdf0914f58d289cd19318cb680beccacd2feca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:48 compute-0 podman[164862]: 2025-11-25 20:17:48.1114536 +0000 UTC m=+0.196452540 container init 5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:17:48 compute-0 podman[164862]: 2025-11-25 20:17:48.125391398 +0000 UTC m=+0.210390328 container start 5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_dewdney, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:17:48 compute-0 podman[164862]: 2025-11-25 20:17:48.130023653 +0000 UTC m=+0.215022593 container attach 5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:17:48 compute-0 ceph-mon[75144]: Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:17:48 compute-0 ceph-mon[75144]: Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:17:48 compute-0 ceph-mon[75144]: pgmap v385: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:17:48.932 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:17:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:17:48.935 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:17:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:17:48.935 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
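
The three DEBUG lines are the standard acquire/held/release trace that oslo.concurrency emits around the metadata agent's child-process check; the 'inner' in each line is the wrapper lockutils generates. A minimal sketch of the pattern that produces them:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Placeholder body; the real ProcessMonitor method walks its
        # monitored external processes here.
        pass
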
Nov 25 20:17:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v386: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:49 compute-0 objective_dewdney[164881]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:17:49 compute-0 objective_dewdney[164881]: --> relative data size: 1.0
Nov 25 20:17:49 compute-0 objective_dewdney[164881]: --> All data devices are unavailable
Nov 25 20:17:49 compute-0 systemd[1]: libpod-5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227.scope: Deactivated successfully.
Nov 25 20:17:49 compute-0 systemd[1]: libpod-5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227.scope: Consumed 1.061s CPU time.
Nov 25 20:17:49 compute-0 podman[164862]: 2025-11-25 20:17:49.238073647 +0000 UTC m=+1.323072577 container died 5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_dewdney, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:17:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3f04dc94532259472e9f166afbdf0914f58d289cd19318cb680beccacd2feca-merged.mount: Deactivated successfully.
Nov 25 20:17:49 compute-0 podman[164862]: 2025-11-25 20:17:49.291691504 +0000 UTC m=+1.376690404 container remove 5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:17:49 compute-0 systemd[1]: libpod-conmon-5221c3a12c1d935c50b2276984c9a15af88cbd7ddfab6ddcd20c58dd1f83c227.scope: Deactivated successfully.
Nov 25 20:17:49 compute-0 sudo[164718]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:49 compute-0 sudo[164960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:17:49 compute-0 sudo[164960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:49 compute-0 sudo[164960]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:49 compute-0 sudo[164987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:17:49 compute-0 sudo[164987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:49 compute-0 sudo[164987]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:49 compute-0 sudo[165014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:17:49 compute-0 sudo[165014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:49 compute-0 sudo[165014]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:49 compute-0 sudo[165041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:17:49 compute-0 sudo[165041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
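
The earlier batch run reported "All data devices are unavailable", consistent with the three LVs already carrying OSDs, so cephadm falls back to inventorying what exists: 'lvm list --format json' maps OSD ids to their backing volumes. A sketch of consuming that output, assuming the documented JSON shape (OSD-id keys, lists of device dicts):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(osd_id, dev.get("type"), dev.get("lv_path"))
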
Nov 25 20:17:50 compute-0 podman[165114]: 2025-11-25 20:17:50.010570523 +0000 UTC m=+0.047220739 container create 3e4c2bbd9c2da311b0509f7b1486892e03f91f29d5c2f247c7995b4b9569d177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:17:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:50 compute-0 systemd[1]: Started libpod-conmon-3e4c2bbd9c2da311b0509f7b1486892e03f91f29d5c2f247c7995b4b9569d177.scope.
Nov 25 20:17:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:17:50 compute-0 podman[165114]: 2025-11-25 20:17:49.992635275 +0000 UTC m=+0.029285491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:17:50 compute-0 podman[165114]: 2025-11-25 20:17:50.107370076 +0000 UTC m=+0.144020292 container init 3e4c2bbd9c2da311b0509f7b1486892e03f91f29d5c2f247c7995b4b9569d177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:17:50 compute-0 podman[165114]: 2025-11-25 20:17:50.116318549 +0000 UTC m=+0.152968745 container start 3e4c2bbd9c2da311b0509f7b1486892e03f91f29d5c2f247c7995b4b9569d177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:17:50 compute-0 podman[165114]: 2025-11-25 20:17:50.119573521 +0000 UTC m=+0.156223717 container attach 3e4c2bbd9c2da311b0509f7b1486892e03f91f29d5c2f247c7995b4b9569d177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:17:50 compute-0 intelligent_moser[165134]: 167 167
Nov 25 20:17:50 compute-0 systemd[1]: libpod-3e4c2bbd9c2da311b0509f7b1486892e03f91f29d5c2f247c7995b4b9569d177.scope: Deactivated successfully.
Nov 25 20:17:50 compute-0 podman[165141]: 2025-11-25 20:17:50.165833545 +0000 UTC m=+0.028816840 container died 3e4c2bbd9c2da311b0509f7b1486892e03f91f29d5c2f247c7995b4b9569d177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 20:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e413c9d57452c115b389575c63775f1287d409cf47ea6efb821a910548434c5d-merged.mount: Deactivated successfully.
Nov 25 20:17:50 compute-0 ceph-mon[75144]: pgmap v386: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:50 compute-0 podman[165141]: 2025-11-25 20:17:50.205195606 +0000 UTC m=+0.068178881 container remove 3e4c2bbd9c2da311b0509f7b1486892e03f91f29d5c2f247c7995b4b9569d177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:17:50 compute-0 systemd[1]: libpod-conmon-3e4c2bbd9c2da311b0509f7b1486892e03f91f29d5c2f247c7995b4b9569d177.scope: Deactivated successfully.
Nov 25 20:17:50 compute-0 podman[165168]: 2025-11-25 20:17:50.407713847 +0000 UTC m=+0.073952675 container create 5d661322cb845f2ca6248f80ddfe5ec3cf3fbddc46e0814e733d3c1ad6e5bd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 20:17:50 compute-0 systemd[1]: Started libpod-conmon-5d661322cb845f2ca6248f80ddfe5ec3cf3fbddc46e0814e733d3c1ad6e5bd7d.scope.
Nov 25 20:17:50 compute-0 podman[165168]: 2025-11-25 20:17:50.374455458 +0000 UTC m=+0.040694276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:17:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede3e0f4efdf69f75e3735528c65d235bc60953b4b4cb26a8ab642708cfe20df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede3e0f4efdf69f75e3735528c65d235bc60953b4b4cb26a8ab642708cfe20df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede3e0f4efdf69f75e3735528c65d235bc60953b4b4cb26a8ab642708cfe20df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede3e0f4efdf69f75e3735528c65d235bc60953b4b4cb26a8ab642708cfe20df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:50 compute-0 podman[165168]: 2025-11-25 20:17:50.527429653 +0000 UTC m=+0.193668541 container init 5d661322cb845f2ca6248f80ddfe5ec3cf3fbddc46e0814e733d3c1ad6e5bd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:17:50 compute-0 podman[165168]: 2025-11-25 20:17:50.540218572 +0000 UTC m=+0.206457400 container start 5d661322cb845f2ca6248f80ddfe5ec3cf3fbddc46e0814e733d3c1ad6e5bd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:17:50 compute-0 podman[165168]: 2025-11-25 20:17:50.544356835 +0000 UTC m=+0.210595673 container attach 5d661322cb845f2ca6248f80ddfe5ec3cf3fbddc46e0814e733d3c1ad6e5bd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:17:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v387: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.205769) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101871205875, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1633, "num_deletes": 251, "total_data_size": 1789464, "memory_usage": 1820032, "flush_reason": "Manual Compaction"}
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101871221182, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1747648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7235, "largest_seqno": 8867, "table_properties": {"data_size": 1740164, "index_size": 4431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14768, "raw_average_key_size": 19, "raw_value_size": 1725142, "raw_average_value_size": 2237, "num_data_blocks": 208, "num_entries": 771, "num_filter_entries": 771, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101687, "oldest_key_time": 1764101687, "file_creation_time": 1764101871, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 15445 microseconds, and 8224 cpu microseconds.
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.221249) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1747648 bytes OK
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.221278) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.222865) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.222886) EVENT_LOG_v1 {"time_micros": 1764101871222879, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.222914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1782414, prev total WAL file size 1782414, number of live WAL files 2.
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.224031) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1706KB)], [23(3832KB)]
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101871224131, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 5672617, "oldest_snapshot_seqno": -1}
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 2713 keys, 4560879 bytes, temperature: kUnknown
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101871255887, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 4560879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4540529, "index_size": 12497, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6789, "raw_key_size": 62818, "raw_average_key_size": 23, "raw_value_size": 4489737, "raw_average_value_size": 1654, "num_data_blocks": 562, "num_entries": 2713, "num_filter_entries": 2713, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764101871, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.256155) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 4560879 bytes
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.257547) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.2 rd, 143.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 3.7 +0.0 blob) out(4.3 +0.0 blob), read-write-amplify(5.9) write-amplify(2.6) OK, records in: 3227, records dropped: 514 output_compression: NoCompression
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.257567) EVENT_LOG_v1 {"time_micros": 1764101871257557, "job": 8, "event": "compaction_finished", "compaction_time_micros": 31824, "compaction_time_cpu_micros": 20462, "output_level": 6, "num_output_files": 1, "total_output_size": 4560879, "num_input_records": 3227, "num_output_records": 2713, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101871258030, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764101871258707, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.223888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.258905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.258915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.258919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.258922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:17:51 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:17:51.258926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
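Everything RocksDB logs after the EVENT_LOG_v1 marker is a plain JSON object, so the flush and compaction figures can be recomputed straight from the journal. A small sketch (standard library only, hypothetical helper name) against the compaction_finished event from JOB 8 above:

    import json

    def parse_event_log(line):
        # RocksDB emits "EVENT_LOG_v1 {...}"; the payload after the marker
        # is a JSON object describing the event.
        marker = "EVENT_LOG_v1 "
        idx = line.find(marker)
        return json.loads(line[idx + len(marker):]) if idx != -1 else None

    event = parse_event_log(
        'rocksdb: EVENT_LOG_v1 {"job": 8, "event": "compaction_finished", '
        '"compaction_time_micros": 31824, "total_output_size": 4560879, '
        '"num_input_records": 3227, "num_output_records": 2713}'
    )
    # bytes per microsecond == MB per second
    mb_per_s = event["total_output_size"] / event["compaction_time_micros"]
    dropped = event["num_input_records"] - event["num_output_records"]
    print(f"{mb_per_s:.1f} MB/s written, {dropped} records dropped")
    # -> 143.3 MB/s written, 514 records dropped, matching the summary line.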
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]: {
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:     "0": [
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:         {
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "devices": [
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "/dev/loop3"
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             ],
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_name": "ceph_lv0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_size": "21470642176",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "name": "ceph_lv0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "tags": {
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.cluster_name": "ceph",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.crush_device_class": "",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.encrypted": "0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.osd_id": "0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.type": "block",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.vdo": "0"
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             },
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "type": "block",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "vg_name": "ceph_vg0"
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:         }
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:     ],
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:     "1": [
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:         {
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "devices": [
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "/dev/loop4"
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             ],
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_name": "ceph_lv1",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_size": "21470642176",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "name": "ceph_lv1",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "tags": {
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.cluster_name": "ceph",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.crush_device_class": "",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.encrypted": "0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.osd_id": "1",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.type": "block",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.vdo": "0"
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             },
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "type": "block",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "vg_name": "ceph_vg1"
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:         }
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:     ],
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:     "2": [
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:         {
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "devices": [
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "/dev/loop5"
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             ],
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_name": "ceph_lv2",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_size": "21470642176",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "name": "ceph_lv2",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "tags": {
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.cluster_name": "ceph",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.crush_device_class": "",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.encrypted": "0",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.osd_id": "2",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.type": "block",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:                 "ceph.vdo": "0"
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             },
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "type": "block",
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:             "vg_name": "ceph_vg2"
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:         }
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]:     ]
Nov 25 20:17:51 compute-0 hopeful_montalcini[165188]: }
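The JSON block printed by hopeful_montalcini is consistent with ceph-volume lvm list --format json: keyed by OSD id, one LV entry per id, carrying the ceph.* tags written at prepare time. A sketch that reduces it to an osd_id -> device map (hypothetical function name):

    import json

    def osd_device_map(lvm_list_json):
        # Keep only the block LV for each OSD and pull out the fields that
        # identify it: LV path, backing devices, and the OSD fsid tag.
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv["tags"].get("ceph.type") == "block":
                    out[int(osd_id)] = (lv["lv_path"], lv["devices"],
                                        lv["tags"]["ceph.osd_fsid"])
        return out

    # Against the listing above:
    # {0: ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'], 'f0a2211a-...'),
    #  1: ('/dev/ceph_vg1/ceph_lv1', ['/dev/loop4'], '7e844079-...'),
    #  2: ('/dev/ceph_vg2/ceph_lv2', ['/dev/loop5'], '21cf5470-...')}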
Nov 25 20:17:51 compute-0 systemd[1]: libpod-5d661322cb845f2ca6248f80ddfe5ec3cf3fbddc46e0814e733d3c1ad6e5bd7d.scope: Deactivated successfully.
Nov 25 20:17:51 compute-0 podman[165168]: 2025-11-25 20:17:51.388690962 +0000 UTC m=+1.054929800 container died 5d661322cb845f2ca6248f80ddfe5ec3cf3fbddc46e0814e733d3c1ad6e5bd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:17:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ede3e0f4efdf69f75e3735528c65d235bc60953b4b4cb26a8ab642708cfe20df-merged.mount: Deactivated successfully.
Nov 25 20:17:51 compute-0 podman[165168]: 2025-11-25 20:17:51.455077697 +0000 UTC m=+1.121316495 container remove 5d661322cb845f2ca6248f80ddfe5ec3cf3fbddc46e0814e733d3c1ad6e5bd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:17:51 compute-0 systemd[1]: libpod-conmon-5d661322cb845f2ca6248f80ddfe5ec3cf3fbddc46e0814e733d3c1ad6e5bd7d.scope: Deactivated successfully.
Nov 25 20:17:51 compute-0 sudo[165041]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:51 compute-0 sudo[165240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:17:51 compute-0 sudo[165240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:51 compute-0 sudo[165240]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:51 compute-0 sudo[165266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:17:51 compute-0 sudo[165266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:51 compute-0 sudo[165266]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:51 compute-0 sudo[165294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:17:51 compute-0 sudo[165294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:51 compute-0 sudo[165294]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:51 compute-0 sudo[165320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:17:51 compute-0 sudo[165320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:52 compute-0 ceph-mon[75144]: pgmap v387: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:52 compute-0 podman[165400]: 2025-11-25 20:17:52.312808188 +0000 UTC m=+0.059233718 container create 77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_carson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 20:17:52 compute-0 systemd[1]: Started libpod-conmon-77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3.scope.
Nov 25 20:17:52 compute-0 podman[165400]: 2025-11-25 20:17:52.29120622 +0000 UTC m=+0.037631780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:17:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:17:52 compute-0 podman[165400]: 2025-11-25 20:17:52.425641263 +0000 UTC m=+0.172066833 container init 77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:17:52 compute-0 podman[165400]: 2025-11-25 20:17:52.433242052 +0000 UTC m=+0.179667612 container start 77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:17:52 compute-0 podman[165400]: 2025-11-25 20:17:52.436990746 +0000 UTC m=+0.183416276 container attach 77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_carson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:17:52 compute-0 musing_carson[165419]: 167 167
Nov 25 20:17:52 compute-0 systemd[1]: libpod-77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3.scope: Deactivated successfully.
Nov 25 20:17:52 compute-0 conmon[165419]: conmon 77c19c501968a97f821a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3.scope/container/memory.events
Nov 25 20:17:52 compute-0 podman[165400]: 2025-11-25 20:17:52.440089173 +0000 UTC m=+0.186514703 container died 77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_carson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:17:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a24320360ae6a6bd8719391512e1a4a56ca7a647830a13f70dd313402612150a-merged.mount: Deactivated successfully.
Nov 25 20:17:52 compute-0 podman[165400]: 2025-11-25 20:17:52.487573037 +0000 UTC m=+0.233998597 container remove 77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_carson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:17:52 compute-0 systemd[1]: libpod-conmon-77c19c501968a97f821a74427e9126b06aaaf21e30e4394fd7c374f367aa09e3.scope: Deactivated successfully.
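The conmon <nwarn> at 20:17:52 is the usual artifact of a very short-lived container: musing_carson exits (and systemd tears down its scope) before conmon can read the cgroup's memory.events file. A sketch of what conmon is attempting, with the benign missing-file case handled:

    def read_memory_events(scope_dir):
        # cgroup v2 memory.events holds lines like "low 0", "oom 0",
        # "oom_kill 0"; if the scope was already removed there is nothing
        # to read -- the situation behind the warning above.
        try:
            with open(scope_dir + "/container/memory.events") as f:
                return dict(line.split() for line in f)
        except FileNotFoundError:
            return {}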
Nov 25 20:17:52 compute-0 podman[165451]: 2025-11-25 20:17:52.706939218 +0000 UTC m=+0.059861774 container create 54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 20:17:52 compute-0 systemd[1]: Started libpod-conmon-54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5.scope.
Nov 25 20:17:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1d0495a458dc1eee0d9aba0db055bdc27af617827356434becdbf6f1417061/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:52 compute-0 podman[165451]: 2025-11-25 20:17:52.682278943 +0000 UTC m=+0.035201509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1d0495a458dc1eee0d9aba0db055bdc27af617827356434becdbf6f1417061/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1d0495a458dc1eee0d9aba0db055bdc27af617827356434becdbf6f1417061/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1d0495a458dc1eee0d9aba0db055bdc27af617827356434becdbf6f1417061/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:17:52 compute-0 podman[165451]: 2025-11-25 20:17:52.795927637 +0000 UTC m=+0.148850213 container init 54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:17:52 compute-0 podman[165451]: 2025-11-25 20:17:52.803138938 +0000 UTC m=+0.156061534 container start 54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:17:52 compute-0 podman[165451]: 2025-11-25 20:17:52.806567743 +0000 UTC m=+0.159490309 container attach 54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 20:17:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v388: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:53 compute-0 ceph-mon[75144]: pgmap v388: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:53 compute-0 lucid_merkle[165472]: {
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "osd_id": 2,
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "type": "bluestore"
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:     },
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "osd_id": 1,
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "type": "bluestore"
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:     },
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "osd_id": 0,
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:         "type": "bluestore"
Nov 25 20:17:53 compute-0 lucid_merkle[165472]:     }
Nov 25 20:17:53 compute-0 lucid_merkle[165472]: }
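This second listing, printed by lucid_merkle, matches the raw list --format json invocation cephadm made through sudo at 20:17:51 above: it is keyed by OSD fsid rather than OSD id. Cross-checking it against the lvm listing is a cheap consistency test; a sketch assuming both JSON strings are at hand:

    import json

    def cross_check(lvm_json, raw_json):
        # lvm list keys by osd_id and tags each LV with ceph.osd_fsid;
        # raw list keys by osd_uuid and carries osd_id. Both views should
        # agree on the pairing for every OSD on the host.
        lvm = {lv["tags"]["ceph.osd_fsid"]: int(osd_id)
               for osd_id, lvs in json.loads(lvm_json).items()
               for lv in lvs if lv["tags"].get("ceph.type") == "block"}
        raw = {u: d["osd_id"] for u, d in json.loads(raw_json).items()}
        return {u: (lvm.get(u), raw[u]) for u in raw if lvm.get(u) != raw[u]}

    # An empty dict means consistent -- the case here, where e.g.
    # f0a2211a-2b5d-4914-9a66-9743102e8fa4 maps to osd_id 0 in both views.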
Nov 25 20:17:53 compute-0 systemd[1]: libpod-54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5.scope: Deactivated successfully.
Nov 25 20:17:53 compute-0 systemd[1]: libpod-54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5.scope: Consumed 1.127s CPU time.
Nov 25 20:17:53 compute-0 podman[165522]: 2025-11-25 20:17:53.972428589 +0000 UTC m=+0.036032330 container died 54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c1d0495a458dc1eee0d9aba0db055bdc27af617827356434becdbf6f1417061-merged.mount: Deactivated successfully.
Nov 25 20:17:54 compute-0 podman[165522]: 2025-11-25 20:17:54.048299981 +0000 UTC m=+0.111903692 container remove 54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:17:54 compute-0 systemd[1]: libpod-conmon-54d89bda288c7a00efea55bd1174d63f81f057113fad1a00cb33be82d94b3ef5.scope: Deactivated successfully.
Nov 25 20:17:54 compute-0 sudo[165320]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:17:54 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:17:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:17:54 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:17:54 compute-0 sudo[165537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:17:54 compute-0 sudo[165537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:54 compute-0 sudo[165537]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:54 compute-0 sudo[165562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:17:54 compute-0 sudo[165562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:17:54 compute-0 sudo[165562]: pam_unix(sudo:session): session closed for user root
Nov 25 20:17:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v389: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:17:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:17:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:17:56 compute-0 ceph-mon[75144]: pgmap v389: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:17:56
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'images', 'vms']
Nov 25 20:17:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
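The balancer run above is the periodic upmap pass: mode upmap, a 5% ceiling on misplaced PGs, and a budget of 10 changes per round; with all 193 PGs active+clean it prepares 0 of them. A deliberately simplified illustration of the gating (the real mgr balancer module does considerably more):

    def balancer_may_optimize(num_misplaced, num_pgs, max_misplaced=0.05):
        # New upmap changes are only queued while the misplaced ratio is
        # at or under the threshold logged above ("max misplaced 0.050000").
        return num_pgs > 0 and (num_misplaced / num_pgs) <= max_misplaced

    balancer_may_optimize(0, 193)  # True; a balanced map still yields 0/10 changes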
Nov 25 20:17:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v390: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:58 compute-0 ceph-mon[75144]: pgmap v390: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v391: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:17:59 compute-0 podman[165592]: 2025-11-25 20:17:59.058907042 +0000 UTC m=+0.140535846 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
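The health_status=healthy events for ovn_controller come from podman's healthcheck timer running the configured test ('/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/ovn_controller per the config_data above). The same check can be driven by hand; a sketch using the podman CLI via subprocess:

    import subprocess

    def is_healthy(container):
        # "podman healthcheck run" executes the container's configured
        # healthcheck command and exits 0 on success, which is what the
        # timer-driven events above report as health_status=healthy.
        return subprocess.run(["podman", "healthcheck", "run",
                               container]).returncode == 0

    # is_healthy("ovn_controller") -> True while health_failing_streak stays 0.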
Nov 25 20:18:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:00 compute-0 ceph-mon[75144]: pgmap v391: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v392: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:18:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
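The pg_autoscaler figures above are internally consistent: 64411926528 bytes is the cluster's 60 GiB raw capacity, and the '.mgr' target of 0.004311449990232467 equals its usage ratio (1.4371499967441557e-05) times 300, i.e. 3 OSDs at the default 100 target PGs per OSD. The raw target is then quantized to a power of two and left alone unless it diverges far from the current value. A rough sketch of that quantization (assumption: the real module also honors pg_num_min/max, bias, and overlapping CRUSH roots):

    import math

    def quantize_pg_target(raw_target, current_pg_num, threshold=3.0):
        # Empty pools (raw target 0.0) keep their current pg_num, which is
        # why the 0.0 targets above stay "quantized to 32 (current 32)".
        if raw_target <= 0:
            return current_pg_num
        target = max(1, 2 ** math.ceil(math.log2(raw_target)))
        # Only move when off by more than the change threshold.
        if (target > current_pg_num * threshold
                or target * threshold < current_pg_num):
            return target
        return current_pg_num

    quantize_pg_target(0.004311449990232467, 1)  # -> 1, as logged for '.mgr'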
Nov 25 20:18:02 compute-0 ceph-mon[75144]: pgmap v392: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v393: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:04 compute-0 ceph-mon[75144]: pgmap v393: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v394: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:06 compute-0 ceph-mon[75144]: pgmap v394: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v395: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:08 compute-0 ceph-mon[75144]: pgmap v395: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:08 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Nov 25 20:18:08 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 20:18:08 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 20:18:08 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 20:18:08 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 20:18:08 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 20:18:08 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 20:18:08 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
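The kernel block above is the trace of an SELinux policy reload (the matching dbus-broker avc op=load_policy audit line, seqno=12, follows at 20:18:17): SID table entries are converted to the new policy and the compiled-in policy capabilities are re-announced. The same flags are readable from selinuxfs; a sketch assuming it is mounted at the usual path:

    import os

    CAPS = "/sys/fs/selinux/policy_capabilities"

    def policy_capabilities():
        # One file per capability, each holding 0 or 1 -- e.g.
        # network_peer_controls=1 and always_check_network=0 above.
        return {name: open(os.path.join(CAPS, name)).read().strip()
                for name in sorted(os.listdir(CAPS))}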
Nov 25 20:18:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v396: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:09 compute-0 ceph-mon[75144]: pgmap v396: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v397: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:12 compute-0 ceph-mon[75144]: pgmap v397: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v398: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:14 compute-0 ceph-mon[75144]: pgmap v398: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v399: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:16 compute-0 ceph-mon[75144]: pgmap v399: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v400: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:17 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 25 20:18:17 compute-0 podman[165630]: 2025-11-25 20:18:17.973717595 +0000 UTC m=+0.060437309 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 20:18:18 compute-0 ceph-mon[75144]: pgmap v400: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:18 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Nov 25 20:18:18 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 20:18:18 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 20:18:18 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 20:18:18 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 20:18:18 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 20:18:18 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 20:18:18 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 20:18:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v401: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:20 compute-0 ceph-mon[75144]: pgmap v401: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v402: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:21 compute-0 ceph-mon[75144]: pgmap v402: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v403: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:24 compute-0 ceph-mon[75144]: pgmap v403: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v404: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:26 compute-0 ceph-mon[75144]: pgmap v404: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:18:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:18:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:18:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:18:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:18:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:18:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v405: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:28 compute-0 ceph-mon[75144]: pgmap v405: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v406: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:29 compute-0 ceph-mon[75144]: pgmap v406: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:29 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 20:18:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:30 compute-0 podman[165655]: 2025-11-25 20:18:30.348000588 +0000 UTC m=+0.157426149 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 20:18:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v407: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:32 compute-0 ceph-mon[75144]: pgmap v407: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v408: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:34 compute-0 ceph-mon[75144]: pgmap v408: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v409: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:36 compute-0 ceph-mon[75144]: pgmap v409: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v410: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:38 compute-0 ceph-mon[75144]: pgmap v410: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v411: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:40 compute-0 ceph-mon[75144]: pgmap v411: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v412: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:42 compute-0 ceph-mon[75144]: pgmap v412: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v413: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:44 compute-0 ceph-mon[75144]: pgmap v413: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v414: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:46 compute-0 ceph-mon[75144]: pgmap v414: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v415: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:48 compute-0 ceph-mon[75144]: pgmap v415: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:18:48.934 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:18:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:18:48.935 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:18:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:18:48.935 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:18:48 compute-0 podman[173757]: 2025-11-25 20:18:48.964760703 +0000 UTC m=+0.060074000 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 20:18:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v416: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:49 compute-0 ceph-mon[75144]: pgmap v416: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v417: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:52 compute-0 ceph-mon[75144]: pgmap v417: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v418: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:53 compute-0 ceph-mon[75144]: pgmap v418: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:54 compute-0 sudo[176242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:18:54 compute-0 sudo[176242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:18:54 compute-0 sudo[176242]: pam_unix(sudo:session): session closed for user root
Nov 25 20:18:54 compute-0 sudo[176304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:18:54 compute-0 sudo[176304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:18:54 compute-0 sudo[176304]: pam_unix(sudo:session): session closed for user root
Nov 25 20:18:54 compute-0 sudo[176364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:18:54 compute-0 sudo[176364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:18:54 compute-0 sudo[176364]: pam_unix(sudo:session): session closed for user root
Nov 25 20:18:54 compute-0 sudo[176434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:18:54 compute-0 sudo[176434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:18:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:18:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v419: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:55 compute-0 sudo[176434]: pam_unix(sudo:session): session closed for user root
Nov 25 20:18:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:18:55 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:18:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:18:55 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:18:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:18:56 compute-0 ceph-mon[75144]: pgmap v419: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:18:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:18:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 48094679-2e7f-4d8e-8ee8-a5d6a53aef1a does not exist
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev bd8d5c3f-dba3-4a83-8374-64df4b10d41b does not exist
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 2680b0c3-c4c2-4bd0-b457-77ac50ec8340 does not exist
Nov 25 20:18:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:18:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:18:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:18:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:18:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:18:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:18:56 compute-0 sudo[177354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:18:56 compute-0 sudo[177354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:18:56 compute-0 sudo[177354]: pam_unix(sudo:session): session closed for user root
Nov 25 20:18:56 compute-0 sudo[177419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:18:56 compute-0 sudo[177419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:18:56 compute-0 sudo[177419]: pam_unix(sudo:session): session closed for user root
Nov 25 20:18:56 compute-0 sudo[177474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:18:56 compute-0 sudo[177474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:18:56 compute-0 sudo[177474]: pam_unix(sudo:session): session closed for user root
Nov 25 20:18:56 compute-0 sudo[177539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:18:56 compute-0 sudo[177539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:18:56
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images', 'backups']
Nov 25 20:18:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:18:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v420: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:57 compute-0 podman[177800]: 2025-11-25 20:18:57.083407962 +0000 UTC m=+0.026496721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:18:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:18:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:18:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:18:57 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:18:58 compute-0 podman[177800]: 2025-11-25 20:18:58.242935406 +0000 UTC m=+1.186024195 container create c042100bc859f4282525d78562c00ec7cda8fdf8b58c438d96ab51fa98febedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:18:58 compute-0 systemd[1]: Started libpod-conmon-c042100bc859f4282525d78562c00ec7cda8fdf8b58c438d96ab51fa98febedc.scope.
Nov 25 20:18:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:18:58 compute-0 podman[177800]: 2025-11-25 20:18:58.475531427 +0000 UTC m=+1.418620206 container init c042100bc859f4282525d78562c00ec7cda8fdf8b58c438d96ab51fa98febedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:18:58 compute-0 podman[177800]: 2025-11-25 20:18:58.486681126 +0000 UTC m=+1.429769875 container start c042100bc859f4282525d78562c00ec7cda8fdf8b58c438d96ab51fa98febedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:18:58 compute-0 musing_gagarin[178319]: 167 167
Nov 25 20:18:58 compute-0 systemd[1]: libpod-c042100bc859f4282525d78562c00ec7cda8fdf8b58c438d96ab51fa98febedc.scope: Deactivated successfully.
Nov 25 20:18:58 compute-0 podman[177800]: 2025-11-25 20:18:58.532987677 +0000 UTC m=+1.476076426 container attach c042100bc859f4282525d78562c00ec7cda8fdf8b58c438d96ab51fa98febedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:18:58 compute-0 podman[177800]: 2025-11-25 20:18:58.535141585 +0000 UTC m=+1.478230364 container died c042100bc859f4282525d78562c00ec7cda8fdf8b58c438d96ab51fa98febedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9afaa2a22b540201752cb94d84ab098bb27b7c93ea10f2db55611756c2abac23-merged.mount: Deactivated successfully.
Nov 25 20:18:58 compute-0 podman[177800]: 2025-11-25 20:18:58.721105896 +0000 UTC m=+1.664194635 container remove c042100bc859f4282525d78562c00ec7cda8fdf8b58c438d96ab51fa98febedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:18:58 compute-0 systemd[1]: libpod-conmon-c042100bc859f4282525d78562c00ec7cda8fdf8b58c438d96ab51fa98febedc.scope: Deactivated successfully.
Nov 25 20:18:58 compute-0 podman[178583]: 2025-11-25 20:18:58.903496562 +0000 UTC m=+0.055707203 container create de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 20:18:58 compute-0 ceph-mon[75144]: pgmap v420: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:58 compute-0 systemd[1]: Started libpod-conmon-de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2.scope.
Nov 25 20:18:58 compute-0 podman[178583]: 2025-11-25 20:18:58.874385523 +0000 UTC m=+0.026596224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:18:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1547e7aa12d995f840831ae18995e5bb9fb5b3717d98a629feb4f78bc8715a97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1547e7aa12d995f840831ae18995e5bb9fb5b3717d98a629feb4f78bc8715a97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1547e7aa12d995f840831ae18995e5bb9fb5b3717d98a629feb4f78bc8715a97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1547e7aa12d995f840831ae18995e5bb9fb5b3717d98a629feb4f78bc8715a97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1547e7aa12d995f840831ae18995e5bb9fb5b3717d98a629feb4f78bc8715a97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:18:59 compute-0 podman[178583]: 2025-11-25 20:18:59.026237651 +0000 UTC m=+0.178448362 container init de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:18:59 compute-0 podman[178583]: 2025-11-25 20:18:59.039980769 +0000 UTC m=+0.192191390 container start de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:18:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v421: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:18:59 compute-0 podman[178583]: 2025-11-25 20:18:59.043906074 +0000 UTC m=+0.196116695 container attach de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:18:59 compute-0 ceph-mon[75144]: pgmap v421: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:00 compute-0 elastic_grothendieck[178651]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:19:00 compute-0 elastic_grothendieck[178651]: --> relative data size: 1.0
Nov 25 20:19:00 compute-0 elastic_grothendieck[178651]: --> All data devices are unavailable
Nov 25 20:19:00 compute-0 systemd[1]: libpod-de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2.scope: Deactivated successfully.
Nov 25 20:19:00 compute-0 systemd[1]: libpod-de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2.scope: Consumed 1.080s CPU time.
Nov 25 20:19:00 compute-0 podman[178583]: 2025-11-25 20:19:00.179942169 +0000 UTC m=+1.332152820 container died de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1547e7aa12d995f840831ae18995e5bb9fb5b3717d98a629feb4f78bc8715a97-merged.mount: Deactivated successfully.
Nov 25 20:19:00 compute-0 podman[178583]: 2025-11-25 20:19:00.259208412 +0000 UTC m=+1.411419033 container remove de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:19:00 compute-0 systemd[1]: libpod-conmon-de66630db606c04559664a2315a279b40a9ffb6e808669b3ea50f0a672895bc2.scope: Deactivated successfully.
Nov 25 20:19:00 compute-0 sudo[177539]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:00 compute-0 sudo[179243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:19:00 compute-0 sudo[179243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:00 compute-0 sudo[179243]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:00 compute-0 sudo[179318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:19:00 compute-0 sudo[179318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:00 compute-0 sudo[179318]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:00 compute-0 podman[179304]: 2025-11-25 20:19:00.593002645 +0000 UTC m=+0.143920397 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 20:19:00 compute-0 sudo[179403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:19:00 compute-0 sudo[179403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:00 compute-0 sudo[179403]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:00 compute-0 sudo[179482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:19:00 compute-0 sudo[179482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v422: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:01 compute-0 podman[179725]: 2025-11-25 20:19:01.117504506 +0000 UTC m=+0.058385655 container create 4243829f8dbf46486ea5d7c958014f6ecee041f881bd5ae5961922b1b499bc84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:19:01 compute-0 systemd[1]: Started libpod-conmon-4243829f8dbf46486ea5d7c958014f6ecee041f881bd5ae5961922b1b499bc84.scope.
Nov 25 20:19:01 compute-0 podman[179725]: 2025-11-25 20:19:01.094179131 +0000 UTC m=+0.035060280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:19:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:19:01 compute-0 podman[179725]: 2025-11-25 20:19:01.239610227 +0000 UTC m=+0.180491386 container init 4243829f8dbf46486ea5d7c958014f6ecee041f881bd5ae5961922b1b499bc84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:19:01 compute-0 podman[179725]: 2025-11-25 20:19:01.252429601 +0000 UTC m=+0.193310760 container start 4243829f8dbf46486ea5d7c958014f6ecee041f881bd5ae5961922b1b499bc84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shtern, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:19:01 compute-0 podman[179725]: 2025-11-25 20:19:01.262134981 +0000 UTC m=+0.203016100 container attach 4243829f8dbf46486ea5d7c958014f6ecee041f881bd5ae5961922b1b499bc84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shtern, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 20:19:01 compute-0 vigorous_shtern[179793]: 167 167
Nov 25 20:19:01 compute-0 systemd[1]: libpod-4243829f8dbf46486ea5d7c958014f6ecee041f881bd5ae5961922b1b499bc84.scope: Deactivated successfully.
Nov 25 20:19:01 compute-0 podman[179725]: 2025-11-25 20:19:01.265882491 +0000 UTC m=+0.206763670 container died 4243829f8dbf46486ea5d7c958014f6ecee041f881bd5ae5961922b1b499bc84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shtern, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:19:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a48000abb3715951af158e4fd7b952fd8c9ce815b92a16f67f46087036d0edd-merged.mount: Deactivated successfully.
Nov 25 20:19:01 compute-0 podman[179725]: 2025-11-25 20:19:01.325429817 +0000 UTC m=+0.266310936 container remove 4243829f8dbf46486ea5d7c958014f6ecee041f881bd5ae5961922b1b499bc84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shtern, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:19:01 compute-0 systemd[1]: libpod-conmon-4243829f8dbf46486ea5d7c958014f6ecee041f881bd5ae5961922b1b499bc84.scope: Deactivated successfully.
Nov 25 20:19:01 compute-0 podman[179964]: 2025-11-25 20:19:01.554143184 +0000 UTC m=+0.053303759 container create d8652111820272add77824156d42f1cc9a76d467d0c3bf84204f0be734db2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:19:01 compute-0 systemd[1]: Started libpod-conmon-d8652111820272add77824156d42f1cc9a76d467d0c3bf84204f0be734db2fbf.scope.
Nov 25 20:19:01 compute-0 podman[179964]: 2025-11-25 20:19:01.52938365 +0000 UTC m=+0.028544205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:19:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b8d1f5fe31535e2de60a7563990b6896f5373771efbef0230b778ebc3cf9a57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b8d1f5fe31535e2de60a7563990b6896f5373771efbef0230b778ebc3cf9a57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b8d1f5fe31535e2de60a7563990b6896f5373771efbef0230b778ebc3cf9a57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b8d1f5fe31535e2de60a7563990b6896f5373771efbef0230b778ebc3cf9a57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:19:01 compute-0 podman[179964]: 2025-11-25 20:19:01.661820599 +0000 UTC m=+0.160981164 container init d8652111820272add77824156d42f1cc9a76d467d0c3bf84204f0be734db2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_snyder, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:19:01 compute-0 podman[179964]: 2025-11-25 20:19:01.671391205 +0000 UTC m=+0.170551750 container start d8652111820272add77824156d42f1cc9a76d467d0c3bf84204f0be734db2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_snyder, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:19:01 compute-0 podman[179964]: 2025-11-25 20:19:01.675120535 +0000 UTC m=+0.174281100 container attach d8652111820272add77824156d42f1cc9a76d467d0c3bf84204f0be734db2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:19:02 compute-0 ceph-mon[75144]: pgmap v422: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:19:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:19:02 compute-0 naughty_snyder[180035]: {
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:     "0": [
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:         {
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "devices": [
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "/dev/loop3"
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             ],
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_name": "ceph_lv0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_size": "21470642176",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "name": "ceph_lv0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "tags": {
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.cluster_name": "ceph",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.crush_device_class": "",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.encrypted": "0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.osd_id": "0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.type": "block",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.vdo": "0"
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             },
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "type": "block",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "vg_name": "ceph_vg0"
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:         }
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:     ],
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:     "1": [
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:         {
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "devices": [
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "/dev/loop4"
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             ],
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_name": "ceph_lv1",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_size": "21470642176",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "name": "ceph_lv1",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "tags": {
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.cluster_name": "ceph",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.crush_device_class": "",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.encrypted": "0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.osd_id": "1",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.type": "block",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.vdo": "0"
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             },
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "type": "block",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "vg_name": "ceph_vg1"
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:         }
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:     ],
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:     "2": [
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:         {
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "devices": [
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "/dev/loop5"
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             ],
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_name": "ceph_lv2",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_size": "21470642176",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "name": "ceph_lv2",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "tags": {
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.cluster_name": "ceph",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.crush_device_class": "",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.encrypted": "0",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.osd_id": "2",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.type": "block",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:                 "ceph.vdo": "0"
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             },
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "type": "block",
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:             "vg_name": "ceph_vg2"
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:         }
Nov 25 20:19:02 compute-0 naughty_snyder[180035]:     ]
Nov 25 20:19:02 compute-0 naughty_snyder[180035]: }
Nov 25 20:19:02 compute-0 systemd[1]: libpod-d8652111820272add77824156d42f1cc9a76d467d0c3bf84204f0be734db2fbf.scope: Deactivated successfully.
Nov 25 20:19:02 compute-0 podman[179964]: 2025-11-25 20:19:02.511658846 +0000 UTC m=+1.010819451 container died d8652111820272add77824156d42f1cc9a76d467d0c3bf84204f0be734db2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b8d1f5fe31535e2de60a7563990b6896f5373771efbef0230b778ebc3cf9a57-merged.mount: Deactivated successfully.
Nov 25 20:19:02 compute-0 podman[179964]: 2025-11-25 20:19:02.600772183 +0000 UTC m=+1.099932718 container remove d8652111820272add77824156d42f1cc9a76d467d0c3bf84204f0be734db2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_snyder, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:19:02 compute-0 systemd[1]: libpod-conmon-d8652111820272add77824156d42f1cc9a76d467d0c3bf84204f0be734db2fbf.scope: Deactivated successfully.
Nov 25 20:19:02 compute-0 sudo[179482]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:02 compute-0 sudo[180554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:19:02 compute-0 sudo[180554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:02 compute-0 sudo[180554]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:02 compute-0 sudo[180621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:19:02 compute-0 sudo[180621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:02 compute-0 sudo[180621]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:02 compute-0 sudo[180683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:19:02 compute-0 sudo[180683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:02 compute-0 sudo[180683]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:03 compute-0 sudo[180741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:19:03 compute-0 sudo[180741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v423: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:03 compute-0 podman[181003]: 2025-11-25 20:19:03.489440191 +0000 UTC m=+0.114686724 container create 3fe04ab82fb13fb579052ff7a3bbc143b8894bdc0a7fa886c23f6f4babaac99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:19:03 compute-0 podman[181003]: 2025-11-25 20:19:03.405326737 +0000 UTC m=+0.030573300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:19:03 compute-0 systemd[1]: Started libpod-conmon-3fe04ab82fb13fb579052ff7a3bbc143b8894bdc0a7fa886c23f6f4babaac99f.scope.
Nov 25 20:19:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:19:03 compute-0 podman[181003]: 2025-11-25 20:19:03.717892551 +0000 UTC m=+0.343139114 container init 3fe04ab82fb13fb579052ff7a3bbc143b8894bdc0a7fa886c23f6f4babaac99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:19:03 compute-0 podman[181003]: 2025-11-25 20:19:03.726934373 +0000 UTC m=+0.352180956 container start 3fe04ab82fb13fb579052ff7a3bbc143b8894bdc0a7fa886c23f6f4babaac99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 20:19:03 compute-0 unruffled_ardinghelli[181136]: 167 167
Nov 25 20:19:03 compute-0 systemd[1]: libpod-3fe04ab82fb13fb579052ff7a3bbc143b8894bdc0a7fa886c23f6f4babaac99f.scope: Deactivated successfully.
Nov 25 20:19:03 compute-0 podman[181003]: 2025-11-25 20:19:03.750546906 +0000 UTC m=+0.375793549 container attach 3fe04ab82fb13fb579052ff7a3bbc143b8894bdc0a7fa886c23f6f4babaac99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:19:03 compute-0 podman[181003]: 2025-11-25 20:19:03.751770849 +0000 UTC m=+0.377017422 container died 3fe04ab82fb13fb579052ff7a3bbc143b8894bdc0a7fa886c23f6f4babaac99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:19:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-af8a002dfd821adbf551c662ffce880e897e322925e206accdaddbf2ef9c4fd0-merged.mount: Deactivated successfully.
Nov 25 20:19:03 compute-0 podman[181003]: 2025-11-25 20:19:03.869309138 +0000 UTC m=+0.494555691 container remove 3fe04ab82fb13fb579052ff7a3bbc143b8894bdc0a7fa886c23f6f4babaac99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:19:03 compute-0 systemd[1]: libpod-conmon-3fe04ab82fb13fb579052ff7a3bbc143b8894bdc0a7fa886c23f6f4babaac99f.scope: Deactivated successfully.
Nov 25 20:19:04 compute-0 podman[181341]: 2025-11-25 20:19:04.055198717 +0000 UTC m=+0.034212017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:19:04 compute-0 podman[181341]: 2025-11-25 20:19:04.286258208 +0000 UTC m=+0.265271488 container create 4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:19:04 compute-0 ceph-mon[75144]: pgmap v423: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:04 compute-0 systemd[1]: Started libpod-conmon-4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02.scope.
Nov 25 20:19:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4b112a27f5bd6f0b04a67304e404612bb342f8956c67bc9c842d025f01ef70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4b112a27f5bd6f0b04a67304e404612bb342f8956c67bc9c842d025f01ef70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4b112a27f5bd6f0b04a67304e404612bb342f8956c67bc9c842d025f01ef70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4b112a27f5bd6f0b04a67304e404612bb342f8956c67bc9c842d025f01ef70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:19:04 compute-0 podman[181341]: 2025-11-25 20:19:04.380182724 +0000 UTC m=+0.359196034 container init 4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:19:04 compute-0 podman[181341]: 2025-11-25 20:19:04.386296797 +0000 UTC m=+0.365310077 container start 4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:19:04 compute-0 podman[181341]: 2025-11-25 20:19:04.389182485 +0000 UTC m=+0.368195785 container attach 4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_knuth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 20:19:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v424: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:05 compute-0 ceph-mon[75144]: pgmap v424: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:05 compute-0 confident_knuth[181513]: {
Nov 25 20:19:05 compute-0 confident_knuth[181513]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "osd_id": 2,
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "type": "bluestore"
Nov 25 20:19:05 compute-0 confident_knuth[181513]:     },
Nov 25 20:19:05 compute-0 confident_knuth[181513]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "osd_id": 1,
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "type": "bluestore"
Nov 25 20:19:05 compute-0 confident_knuth[181513]:     },
Nov 25 20:19:05 compute-0 confident_knuth[181513]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "osd_id": 0,
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:19:05 compute-0 confident_knuth[181513]:         "type": "bluestore"
Nov 25 20:19:05 compute-0 confident_knuth[181513]:     }
Nov 25 20:19:05 compute-0 confident_knuth[181513]: }
Nov 25 20:19:05 compute-0 systemd[1]: libpod-4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02.scope: Deactivated successfully.
Nov 25 20:19:05 compute-0 systemd[1]: libpod-4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02.scope: Consumed 1.067s CPU time.
Nov 25 20:19:05 compute-0 podman[181341]: 2025-11-25 20:19:05.449198543 +0000 UTC m=+1.428211833 container died 4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_knuth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d4b112a27f5bd6f0b04a67304e404612bb342f8956c67bc9c842d025f01ef70-merged.mount: Deactivated successfully.
Nov 25 20:19:05 compute-0 podman[181341]: 2025-11-25 20:19:05.514174824 +0000 UTC m=+1.493188104 container remove 4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:19:05 compute-0 systemd[1]: libpod-conmon-4a93423af8cfe0e9563dcefdcfefa62ed98869b2fe9552d33c250abce1712e02.scope: Deactivated successfully.
Nov 25 20:19:05 compute-0 sudo[180741]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:19:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:19:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:19:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:19:05 compute-0 sudo[182295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:19:05 compute-0 sudo[182295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:05 compute-0 sudo[182295]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:05 compute-0 sudo[182359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:19:05 compute-0 sudo[182359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:19:05 compute-0 sudo[182359]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:19:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:19:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v425: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:07 compute-0 ceph-mon[75144]: pgmap v425: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v426: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:10 compute-0 ceph-mon[75144]: pgmap v426: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v427: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:12 compute-0 ceph-mon[75144]: pgmap v427: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v428: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:14 compute-0 ceph-mon[75144]: pgmap v428: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v429: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:16 compute-0 ceph-mon[75144]: pgmap v429: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v430: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:18 compute-0 ceph-mon[75144]: pgmap v430: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v431: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:20 compute-0 podman[183410]: 2025-11-25 20:19:20.010553074 +0000 UTC m=+0.090034053 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 20:19:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:20 compute-0 ceph-mon[75144]: pgmap v431: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:20 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Nov 25 20:19:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 20:19:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 20:19:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 20:19:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 20:19:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 20:19:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 20:19:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 20:19:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v432: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:21 compute-0 ceph-mon[75144]: pgmap v432: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:21 compute-0 groupadd[183442]: group added to /etc/group: name=dnsmasq, GID=991
Nov 25 20:19:21 compute-0 groupadd[183442]: group added to /etc/gshadow: name=dnsmasq
Nov 25 20:19:21 compute-0 groupadd[183442]: new group: name=dnsmasq, GID=991
Nov 25 20:19:22 compute-0 useradd[183449]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 25 20:19:22 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Nov 25 20:19:22 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 20:19:22 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Nov 25 20:19:23 compute-0 groupadd[183462]: group added to /etc/group: name=clevis, GID=990
Nov 25 20:19:23 compute-0 groupadd[183462]: group added to /etc/gshadow: name=clevis
Nov 25 20:19:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v433: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:23 compute-0 groupadd[183462]: new group: name=clevis, GID=990
Nov 25 20:19:23 compute-0 useradd[183469]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 25 20:19:23 compute-0 usermod[183479]: add 'clevis' to group 'tss'
Nov 25 20:19:23 compute-0 usermod[183479]: add 'clevis' to shadow group 'tss'
Nov 25 20:19:24 compute-0 ceph-mon[75144]: pgmap v433: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v434: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:25 compute-0 polkitd[43533]: Reloading rules
Nov 25 20:19:25 compute-0 polkitd[43533]: Collecting garbage unconditionally...
Nov 25 20:19:25 compute-0 polkitd[43533]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 20:19:25 compute-0 polkitd[43533]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 20:19:25 compute-0 polkitd[43533]: Finished loading, compiling and executing 3 rules
Nov 25 20:19:25 compute-0 polkitd[43533]: Reloading rules
Nov 25 20:19:25 compute-0 polkitd[43533]: Collecting garbage unconditionally...
Nov 25 20:19:25 compute-0 polkitd[43533]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 20:19:25 compute-0 polkitd[43533]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 20:19:25 compute-0 polkitd[43533]: Finished loading, compiling and executing 3 rules
Nov 25 20:19:26 compute-0 ceph-mon[75144]: pgmap v434: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:19:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:19:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:19:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:19:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:19:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:19:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v435: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:27 compute-0 groupadd[183666]: group added to /etc/group: name=ceph, GID=167
Nov 25 20:19:27 compute-0 groupadd[183666]: group added to /etc/gshadow: name=ceph
Nov 25 20:19:27 compute-0 groupadd[183666]: new group: name=ceph, GID=167
Nov 25 20:19:27 compute-0 useradd[183672]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 25 20:19:28 compute-0 ceph-mon[75144]: pgmap v435: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v436: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:30 compute-0 ceph-mon[75144]: pgmap v436: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v437: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:31 compute-0 podman[184062]: 2025-11-25 20:19:31.110126946 +0000 UTC m=+0.180569141 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:19:31 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 20:19:31 compute-0 sshd[1007]: Received signal 15; terminating.
Nov 25 20:19:31 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 20:19:31 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 20:19:31 compute-0 systemd[1]: sshd.service: Consumed 6.446s CPU time, read 32.0K from disk, written 24.0K to disk.
Nov 25 20:19:31 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 20:19:31 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 25 20:19:31 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 20:19:31 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 20:19:31 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 20:19:31 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 25 20:19:31 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 25 20:19:31 compute-0 sshd[184303]: Server listening on 0.0.0.0 port 22.
Nov 25 20:19:31 compute-0 sshd[184303]: Server listening on :: port 22.
Nov 25 20:19:31 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 25 20:19:32 compute-0 ceph-mon[75144]: pgmap v437: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v438: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:34 compute-0 ceph-mon[75144]: pgmap v438: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 20:19:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 20:19:34 compute-0 systemd[1]: Reloading.
Nov 25 20:19:34 compute-0 systemd-sysv-generator[184559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:34 compute-0 systemd-rc-local-generator[184552]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 20:19:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v439: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:35 compute-0 ceph-mon[75144]: pgmap v439: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v440: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:37 compute-0 sudo[164497]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:38 compute-0 sudo[187873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imgledphmseeahpdwcalxfqivkhbojiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101977.4148602-336-81631655905899/AnsiballZ_systemd.py'
Nov 25 20:19:38 compute-0 sudo[187873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:38 compute-0 ceph-mon[75144]: pgmap v440: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:38 compute-0 python3.9[187907]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:19:38 compute-0 systemd[1]: Reloading.
Nov 25 20:19:38 compute-0 systemd-sysv-generator[188255]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:38 compute-0 systemd-rc-local-generator[188250]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:38 compute-0 sudo[187873]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v441: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:39 compute-0 sudo[189115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysbsnvpivlpzgnggcalrskkehvjjawof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101979.0363703-336-28514910113595/AnsiballZ_systemd.py'
Nov 25 20:19:39 compute-0 sudo[189115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:39 compute-0 python3.9[189137]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:19:39 compute-0 systemd[1]: Reloading.
Nov 25 20:19:39 compute-0 systemd-sysv-generator[189546]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:39 compute-0 systemd-rc-local-generator[189541]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:40 compute-0 ceph-mon[75144]: pgmap v441: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:40 compute-0 sudo[189115]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:40 compute-0 sudo[190293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujdgzesqqeuekkyjkuzrvdflgdsfgeaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101980.3078194-336-208389703656292/AnsiballZ_systemd.py'
Nov 25 20:19:40 compute-0 sudo[190293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:40 compute-0 python3.9[190316]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:19:41 compute-0 systemd[1]: Reloading.
Nov 25 20:19:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v442: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:41 compute-0 systemd-rc-local-generator[190740]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:41 compute-0 systemd-sysv-generator[190744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:41 compute-0 sudo[190293]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:41 compute-0 sudo[191527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yomnkcykuwiodoirfvptezineyfjeagd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101981.4982665-336-179180744275779/AnsiballZ_systemd.py'
Nov 25 20:19:41 compute-0 sudo[191527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:42 compute-0 python3.9[191557]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:19:42 compute-0 ceph-mon[75144]: pgmap v442: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:42 compute-0 systemd[1]: Reloading.
Nov 25 20:19:42 compute-0 systemd-rc-local-generator[191953]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:42 compute-0 systemd-sysv-generator[191960]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:42 compute-0 sudo[191527]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:43 compute-0 sudo[192846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkiwyxyqbkbbxmctzjddowptsouqnfoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101982.688053-365-111140132991310/AnsiballZ_systemd.py'
Nov 25 20:19:43 compute-0 sudo[192846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v443: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:43 compute-0 python3.9[192864]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:43 compute-0 systemd[1]: Reloading.
Nov 25 20:19:43 compute-0 systemd-rc-local-generator[193328]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:43 compute-0 systemd-sysv-generator[193333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:43 compute-0 sudo[192846]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:44 compute-0 ceph-mon[75144]: pgmap v443: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:44 compute-0 sudo[193977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdjvfawozdahzpmjprattewbvohbduhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101983.9719536-365-254413891326406/AnsiballZ_systemd.py'
Nov 25 20:19:44 compute-0 sudo[193977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:44 compute-0 python3.9[194007]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 20:19:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 20:19:44 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.758s CPU time.
Nov 25 20:19:44 compute-0 systemd[1]: run-reeb9959f34854d549c46e8629ac01e00.service: Deactivated successfully.
Nov 25 20:19:44 compute-0 systemd[1]: Reloading.
Nov 25 20:19:44 compute-0 systemd-rc-local-generator[194078]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:44 compute-0 systemd-sysv-generator[194081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:45 compute-0 sudo[193977]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v444: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:45 compute-0 sudo[194235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efapdfslhkmjvvktouiljftjmpxewaay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101985.1732178-365-175559453092777/AnsiballZ_systemd.py'
Nov 25 20:19:45 compute-0 sudo[194235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:45 compute-0 python3.9[194237]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:46 compute-0 systemd[1]: Reloading.
Nov 25 20:19:46 compute-0 systemd-rc-local-generator[194269]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:46 compute-0 systemd-sysv-generator[194273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:46 compute-0 ceph-mon[75144]: pgmap v444: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:46 compute-0 sudo[194235]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:46 compute-0 sudo[194426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpwzqrkncswzbvvnicgayzhnhvubjdsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101986.5530033-365-172671810114581/AnsiballZ_systemd.py'
Nov 25 20:19:46 compute-0 sudo[194426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v445: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:47 compute-0 python3.9[194428]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:47 compute-0 sudo[194426]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:48 compute-0 sudo[194581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzgtutkkhxmqladvrqfojgpndessdskb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101987.627084-365-58978065659209/AnsiballZ_systemd.py'
Nov 25 20:19:48 compute-0 sudo[194581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:48 compute-0 ceph-mon[75144]: pgmap v445: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:48 compute-0 python3.9[194583]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:48 compute-0 systemd[1]: Reloading.
Nov 25 20:19:48 compute-0 systemd-rc-local-generator[194613]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:48 compute-0 systemd-sysv-generator[194617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:48 compute-0 sudo[194581]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:19:48.936 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:19:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:19:48.939 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:19:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:19:48.939 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:19:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v446: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:49 compute-0 sudo[194771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sednlcaksvwmcqxigwdkxebamasiagzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101988.989718-401-121672190106110/AnsiballZ_systemd.py'
Nov 25 20:19:49 compute-0 sudo[194771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:49 compute-0 python3.9[194773]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:19:49 compute-0 systemd[1]: Reloading.
Nov 25 20:19:49 compute-0 systemd-rc-local-generator[194805]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:19:49 compute-0 systemd-sysv-generator[194809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:19:50 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 20:19:50 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 20:19:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:50 compute-0 sudo[194771]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:50 compute-0 podman[194816]: 2025-11-25 20:19:50.124057263 +0000 UTC m=+0.068067020 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 20:19:50 compute-0 ceph-mon[75144]: pgmap v446: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:50 compute-0 sudo[194985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpiuzjmtbiitolnvuewvpcgefzpvdjfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101990.281325-409-226940062722426/AnsiballZ_systemd.py'
Nov 25 20:19:50 compute-0 sudo[194985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:51 compute-0 python3.9[194987]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v447: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:51 compute-0 sudo[194985]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:51 compute-0 sudo[195140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zovqxtnfnkwkgkobvrckepvackpsxdzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101991.3403244-409-257455872982594/AnsiballZ_systemd.py'
Nov 25 20:19:51 compute-0 sudo[195140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:52 compute-0 python3.9[195142]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:52 compute-0 sudo[195140]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:52 compute-0 ceph-mon[75144]: pgmap v447: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:52 compute-0 sudo[195295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoffqgksjfplnkrtlpvcozrhjljojqoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101992.3210442-409-157460641337546/AnsiballZ_systemd.py'
Nov 25 20:19:52 compute-0 sudo[195295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:53 compute-0 python3.9[195297]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v448: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:53 compute-0 sudo[195295]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:53 compute-0 sudo[195450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rquogbywxamyviftqisqtfnnynimdwka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101993.3032734-409-227116610480825/AnsiballZ_systemd.py'
Nov 25 20:19:53 compute-0 sudo[195450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:54 compute-0 python3.9[195452]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:54 compute-0 sudo[195450]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:54 compute-0 ceph-mon[75144]: pgmap v448: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:54 compute-0 sudo[195605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtalwbdsdimsklqfwyvfmfaezkklwmyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101994.3305597-409-261872430610375/AnsiballZ_systemd.py'
Nov 25 20:19:54 compute-0 sudo[195605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:55 compute-0 python3.9[195607]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:19:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v449: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:55 compute-0 sudo[195605]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:55 compute-0 sudo[195760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdmsmduozccgdokujygshheacdysnwys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101995.3104277-409-5776097930743/AnsiballZ_systemd.py'
Nov 25 20:19:55 compute-0 sudo[195760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:56 compute-0 python3.9[195762]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:56 compute-0 sudo[195760]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:56 compute-0 ceph-mon[75144]: pgmap v449: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:56 compute-0 sudo[195915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhxlggjuvpoycgyfjyztbqeazumyxqpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101996.3371084-409-83074835196495/AnsiballZ_systemd.py'
Nov 25 20:19:56 compute-0 sudo[195915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:19:56
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'images', 'cephfs.cephfs.data', 'vms', 'volumes']
Nov 25 20:19:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:19:57 compute-0 python3.9[195917]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v450: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:57 compute-0 sudo[195915]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:57 compute-0 sudo[196070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmsjufquumzoiqxxvtykwzvduecfqhpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101997.333549-409-145279446605186/AnsiballZ_systemd.py'
Nov 25 20:19:57 compute-0 sudo[196070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:58 compute-0 python3.9[196072]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:19:58 compute-0 ceph-mon[75144]: pgmap v450: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:58 compute-0 sudo[196070]: pam_unix(sudo:session): session closed for user root
Nov 25 20:19:58 compute-0 sudo[196225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzocluelidathjoteqeqwgnjwxzxwhsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764101998.416792-409-210844179106409/AnsiballZ_systemd.py'
Nov 25 20:19:58 compute-0 sudo[196225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:19:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v451: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:19:59 compute-0 python3.9[196227]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:20:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:00 compute-0 ceph-mon[75144]: pgmap v451: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:00 compute-0 sudo[196225]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:00 compute-0 sudo[196380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crwfbctwowakvkkehazwfqdcyjoucwqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102000.431061-409-56400540205838/AnsiballZ_systemd.py'
Nov 25 20:20:00 compute-0 sudo[196380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v452: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:01 compute-0 python3.9[196382]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:20:01 compute-0 sudo[196380]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:01 compute-0 podman[196384]: 2025-11-25 20:20:01.312427072 +0000 UTC m=+0.097074972 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 20:20:01 compute-0 sudo[196562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbdtyueuxlewbaibkxxjesvfchrwsgnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102001.4322524-409-228188223161546/AnsiballZ_systemd.py'
Nov 25 20:20:01 compute-0 sudo[196562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:20:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:20:02 compute-0 python3.9[196564]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:20:02 compute-0 sudo[196562]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:02 compute-0 ceph-mon[75144]: pgmap v452: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:02 compute-0 sudo[196717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nakkpcgmqovvzzihsxaidklvuospmdwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102002.4682214-409-273175124967475/AnsiballZ_systemd.py'
Nov 25 20:20:02 compute-0 sudo[196717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v453: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:03 compute-0 python3.9[196719]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:20:03 compute-0 ceph-mon[75144]: pgmap v453: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:03 compute-0 sudo[196717]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:04 compute-0 sudo[196872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atzculaacidrqnuoifsdzdtrhgkogztz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102003.5937526-409-52532807103190/AnsiballZ_systemd.py'
Nov 25 20:20:04 compute-0 sudo[196872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:04 compute-0 python3.9[196874]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:20:04 compute-0 sudo[196872]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v454: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:05 compute-0 sudo[197027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luijkzlpeeoinuvnhpxnjfuryldyxpqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102004.664267-409-79549931086588/AnsiballZ_systemd.py'
Nov 25 20:20:05 compute-0 sudo[197027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:05 compute-0 python3.9[197029]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:20:05 compute-0 sudo[197027]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:05 compute-0 sudo[197057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:05 compute-0 sudo[197057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:05 compute-0 sudo[197057]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:05 compute-0 sudo[197082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:20:05 compute-0 sudo[197082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:05 compute-0 sudo[197082]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:06 compute-0 sudo[197107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:06 compute-0 sudo[197107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:06 compute-0 sudo[197107]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:06 compute-0 sudo[197155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:20:06 compute-0 sudo[197155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:06 compute-0 ceph-mon[75144]: pgmap v454: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:06 compute-0 sudo[197307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goifnplqjimxgzopedbjcktxwrihenvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102006.0523689-511-92877181265598/AnsiballZ_file.py'
Nov 25 20:20:06 compute-0 sudo[197307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:06 compute-0 python3.9[197311]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:20:06 compute-0 sudo[197307]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:06 compute-0 auditd[703]: Audit daemon rotating log files
Nov 25 20:20:06 compute-0 podman[197354]: 2025-11-25 20:20:06.745832703 +0000 UTC m=+0.089609793 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:20:06 compute-0 podman[197354]: 2025-11-25 20:20:06.859915295 +0000 UTC m=+0.203692425 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:20:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v455: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:07 compute-0 sudo[197574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztnpjemzfyedqfewucxjybxaftlcygqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102006.8023844-511-135492112962041/AnsiballZ_file.py'
Nov 25 20:20:07 compute-0 sudo[197574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:07 compute-0 python3.9[197580]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:20:07 compute-0 sudo[197574]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:07 compute-0 sudo[197155]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:20:07 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:20:07 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:07 compute-0 sudo[197659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:07 compute-0 sudo[197659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:07 compute-0 sudo[197659]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:07 compute-0 sudo[197717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:20:07 compute-0 sudo[197717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:07 compute-0 sudo[197717]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:07 compute-0 sudo[197767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:07 compute-0 sudo[197767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:07 compute-0 sudo[197767]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:07 compute-0 sudo[197808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:20:07 compute-0 sudo[197808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:08 compute-0 sudo[197876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvafxdjaolspkyooyqymgsetxiuynngl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102007.6357791-511-278682849179954/AnsiballZ_file.py'
Nov 25 20:20:08 compute-0 sudo[197876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:08 compute-0 ceph-mon[75144]: pgmap v455: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:08 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:08 compute-0 python3.9[197878]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:20:08 compute-0 sudo[197876]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:08 compute-0 sudo[197808]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:20:08 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:20:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:20:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:20:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:20:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:08 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev eac41d86-30be-4efe-9617-94c1c7e5b6f5 does not exist
Nov 25 20:20:08 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d24fec8d-5fdb-40e5-9f6c-056a81986611 does not exist
Nov 25 20:20:08 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev dd8386b3-d91e-4687-8898-c5352a8bc014 does not exist
Nov 25 20:20:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:20:08 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:20:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:20:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:20:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:20:08 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:20:08 compute-0 sudo[197952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:08 compute-0 sudo[197952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:08 compute-0 sudo[197952]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:08 compute-0 sudo[198005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:20:08 compute-0 sudo[198005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:08 compute-0 sudo[198005]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:08 compute-0 sudo[198040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:08 compute-0 sudo[198040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:08 compute-0 sudo[198040]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:08 compute-0 sudo[198084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:20:08 compute-0 sudo[198084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:09 compute-0 sudo[198184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inoyysywbsbdergnhqksjcgrgtdtzslz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102008.5047688-511-60721557891000/AnsiballZ_file.py'
Nov 25 20:20:09 compute-0 sudo[198184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v456: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:20:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:20:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:20:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:20:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:20:09 compute-0 podman[198199]: 2025-11-25 20:20:09.153078302 +0000 UTC m=+0.048095510 container create d4ddeac2f15b1986993ccfdd15cd0470f05a238fc0f2b1ea871af710d189f634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:20:09 compute-0 systemd[1]: Started libpod-conmon-d4ddeac2f15b1986993ccfdd15cd0470f05a238fc0f2b1ea871af710d189f634.scope.
Nov 25 20:20:09 compute-0 podman[198199]: 2025-11-25 20:20:09.132620288 +0000 UTC m=+0.027637506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:20:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:20:09 compute-0 podman[198199]: 2025-11-25 20:20:09.260315892 +0000 UTC m=+0.155333120 container init d4ddeac2f15b1986993ccfdd15cd0470f05a238fc0f2b1ea871af710d189f634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:20:09 compute-0 python3.9[198191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:20:09 compute-0 podman[198199]: 2025-11-25 20:20:09.268566091 +0000 UTC m=+0.163583289 container start d4ddeac2f15b1986993ccfdd15cd0470f05a238fc0f2b1ea871af710d189f634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:20:09 compute-0 podman[198199]: 2025-11-25 20:20:09.271659524 +0000 UTC m=+0.166676722 container attach d4ddeac2f15b1986993ccfdd15cd0470f05a238fc0f2b1ea871af710d189f634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 20:20:09 compute-0 blissful_golick[198216]: 167 167
Nov 25 20:20:09 compute-0 systemd[1]: libpod-d4ddeac2f15b1986993ccfdd15cd0470f05a238fc0f2b1ea871af710d189f634.scope: Deactivated successfully.
Nov 25 20:20:09 compute-0 podman[198199]: 2025-11-25 20:20:09.277609002 +0000 UTC m=+0.172626200 container died d4ddeac2f15b1986993ccfdd15cd0470f05a238fc0f2b1ea871af710d189f634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:20:09 compute-0 sudo[198184]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-77a3111af49d7226bed8ea73ff34fe472f94b57b8523e87f4e905200e48880dd-merged.mount: Deactivated successfully.
Nov 25 20:20:09 compute-0 podman[198199]: 2025-11-25 20:20:09.32230766 +0000 UTC m=+0.217324898 container remove d4ddeac2f15b1986993ccfdd15cd0470f05a238fc0f2b1ea871af710d189f634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:20:09 compute-0 systemd[1]: libpod-conmon-d4ddeac2f15b1986993ccfdd15cd0470f05a238fc0f2b1ea871af710d189f634.scope: Deactivated successfully.
Nov 25 20:20:09 compute-0 podman[198278]: 2025-11-25 20:20:09.498286198 +0000 UTC m=+0.046151207 container create 6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:20:09 compute-0 systemd[1]: Started libpod-conmon-6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106.scope.
Nov 25 20:20:09 compute-0 podman[198278]: 2025-11-25 20:20:09.478990415 +0000 UTC m=+0.026855444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:20:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c8a7290778abcd1275652bcef5b9067cb2beae9d02ef2e24fc6876260b154/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c8a7290778abcd1275652bcef5b9067cb2beae9d02ef2e24fc6876260b154/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c8a7290778abcd1275652bcef5b9067cb2beae9d02ef2e24fc6876260b154/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c8a7290778abcd1275652bcef5b9067cb2beae9d02ef2e24fc6876260b154/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c8a7290778abcd1275652bcef5b9067cb2beae9d02ef2e24fc6876260b154/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:09 compute-0 podman[198278]: 2025-11-25 20:20:09.599015985 +0000 UTC m=+0.146881054 container init 6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:20:09 compute-0 podman[198278]: 2025-11-25 20:20:09.610456919 +0000 UTC m=+0.158321918 container start 6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:20:09 compute-0 podman[198278]: 2025-11-25 20:20:09.61422481 +0000 UTC m=+0.162089889 container attach 6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:20:09 compute-0 sudo[198411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhmyhqgunqgvvxgvyrhvrkpbrismimwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102009.4700031-511-172764544996660/AnsiballZ_file.py'
Nov 25 20:20:09 compute-0 sudo[198411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:10 compute-0 python3.9[198413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:20:10 compute-0 ceph-mon[75144]: pgmap v456: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:10 compute-0 sudo[198411]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:10 compute-0 compassionate_hamilton[198331]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:20:10 compute-0 compassionate_hamilton[198331]: --> relative data size: 1.0
Nov 25 20:20:10 compute-0 compassionate_hamilton[198331]: --> All data devices are unavailable
Nov 25 20:20:10 compute-0 sudo[198587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvkeolkdsrwmqwbmvyfojdwecoodtvct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102010.3304021-511-109290403782220/AnsiballZ_file.py'
Nov 25 20:20:10 compute-0 sudo[198587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:10 compute-0 systemd[1]: libpod-6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106.scope: Deactivated successfully.
Nov 25 20:20:10 compute-0 systemd[1]: libpod-6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106.scope: Consumed 1.111s CPU time.
Nov 25 20:20:10 compute-0 podman[198278]: 2025-11-25 20:20:10.779201057 +0000 UTC m=+1.327066086 container died 6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:20:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-480c8a7290778abcd1275652bcef5b9067cb2beae9d02ef2e24fc6876260b154-merged.mount: Deactivated successfully.
Nov 25 20:20:10 compute-0 podman[198278]: 2025-11-25 20:20:10.856569754 +0000 UTC m=+1.404434783 container remove 6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:20:10 compute-0 systemd[1]: libpod-conmon-6bc3175690cf923d05ab2926ff66b4310b9583e9edd2b09df491c3497dd96106.scope: Deactivated successfully.
Nov 25 20:20:10 compute-0 sudo[198084]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:10 compute-0 sudo[198600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:11 compute-0 sudo[198600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:11 compute-0 sudo[198600]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:11 compute-0 python3.9[198589]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:20:11 compute-0 sudo[198587]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v457: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:11 compute-0 sudo[198625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:20:11 compute-0 sudo[198625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:11 compute-0 sudo[198625]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:11 compute-0 sudo[198658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:11 compute-0 sudo[198658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:11 compute-0 sudo[198658]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:11 compute-0 sudo[198699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:20:11 compute-0 sudo[198699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:11 compute-0 podman[198814]: 2025-11-25 20:20:11.617610664 +0000 UTC m=+0.064596468 container create 0d952c4ea8878302bd6bf7611cca47355a1a3a98e30db614660c9a485c25e501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shirley, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:20:11 compute-0 podman[198814]: 2025-11-25 20:20:11.587501873 +0000 UTC m=+0.034487707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:20:11 compute-0 systemd[1]: Started libpod-conmon-0d952c4ea8878302bd6bf7611cca47355a1a3a98e30db614660c9a485c25e501.scope.
Nov 25 20:20:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:20:11 compute-0 podman[198814]: 2025-11-25 20:20:11.747222969 +0000 UTC m=+0.194208723 container init 0d952c4ea8878302bd6bf7611cca47355a1a3a98e30db614660c9a485c25e501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:20:11 compute-0 podman[198814]: 2025-11-25 20:20:11.756852645 +0000 UTC m=+0.203838419 container start 0d952c4ea8878302bd6bf7611cca47355a1a3a98e30db614660c9a485c25e501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:20:11 compute-0 podman[198814]: 2025-11-25 20:20:11.761713354 +0000 UTC m=+0.208699168 container attach 0d952c4ea8878302bd6bf7611cca47355a1a3a98e30db614660c9a485c25e501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:20:11 compute-0 zealous_shirley[198853]: 167 167
Nov 25 20:20:11 compute-0 systemd[1]: libpod-0d952c4ea8878302bd6bf7611cca47355a1a3a98e30db614660c9a485c25e501.scope: Deactivated successfully.
Nov 25 20:20:11 compute-0 podman[198814]: 2025-11-25 20:20:11.763563194 +0000 UTC m=+0.210548978 container died 0d952c4ea8878302bd6bf7611cca47355a1a3a98e30db614660c9a485c25e501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:20:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b37370fa61e51827842fae6b65504c8c40ef1c8683610591da9baf45df8bf32e-merged.mount: Deactivated successfully.
Nov 25 20:20:11 compute-0 podman[198814]: 2025-11-25 20:20:11.818960276 +0000 UTC m=+0.265946050 container remove 0d952c4ea8878302bd6bf7611cca47355a1a3a98e30db614660c9a485c25e501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 25 20:20:11 compute-0 systemd[1]: libpod-conmon-0d952c4ea8878302bd6bf7611cca47355a1a3a98e30db614660c9a485c25e501.scope: Deactivated successfully.
Nov 25 20:20:11 compute-0 sudo[198921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czebakvklmishvqahuptirgezmhqjyby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102011.264268-554-270115059238685/AnsiballZ_stat.py'
Nov 25 20:20:11 compute-0 sudo[198921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:12 compute-0 podman[198929]: 2025-11-25 20:20:12.037937977 +0000 UTC m=+0.055927008 container create 76da3dac34cb69c2fb5a15472dce39eb80735424d1f6a9d925e213f75a951fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:20:12 compute-0 systemd[1]: Started libpod-conmon-76da3dac34cb69c2fb5a15472dce39eb80735424d1f6a9d925e213f75a951fdc.scope.
Nov 25 20:20:12 compute-0 python3.9[198923]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:12 compute-0 podman[198929]: 2025-11-25 20:20:12.012382008 +0000 UTC m=+0.030371089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:20:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3819e960f781b0eac824e6ef2d974b080244b5ea5c419c042eefd6b81350a50f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3819e960f781b0eac824e6ef2d974b080244b5ea5c419c042eefd6b81350a50f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3819e960f781b0eac824e6ef2d974b080244b5ea5c419c042eefd6b81350a50f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3819e960f781b0eac824e6ef2d974b080244b5ea5c419c042eefd6b81350a50f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:12 compute-0 podman[198929]: 2025-11-25 20:20:12.140691468 +0000 UTC m=+0.158680539 container init 76da3dac34cb69c2fb5a15472dce39eb80735424d1f6a9d925e213f75a951fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:20:12 compute-0 sudo[198921]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:12 compute-0 podman[198929]: 2025-11-25 20:20:12.152830681 +0000 UTC m=+0.170819702 container start 76da3dac34cb69c2fb5a15472dce39eb80735424d1f6a9d925e213f75a951fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:20:12 compute-0 podman[198929]: 2025-11-25 20:20:12.156487538 +0000 UTC m=+0.174476569 container attach 76da3dac34cb69c2fb5a15472dce39eb80735424d1f6a9d925e213f75a951fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 20:20:12 compute-0 ceph-mon[75144]: pgmap v457: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:12 compute-0 sudo[199075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxqwnjoopjnnlsfxyyqesznsyofltrgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102011.264268-554-270115059238685/AnsiballZ_copy.py'
Nov 25 20:20:12 compute-0 sudo[199075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:12 compute-0 boring_wilson[198946]: {
Nov 25 20:20:12 compute-0 boring_wilson[198946]:     "0": [
Nov 25 20:20:12 compute-0 boring_wilson[198946]:         {
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "devices": [
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "/dev/loop3"
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             ],
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_name": "ceph_lv0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_size": "21470642176",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "name": "ceph_lv0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "tags": {
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.cluster_name": "ceph",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.crush_device_class": "",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.encrypted": "0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.osd_id": "0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.type": "block",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.vdo": "0"
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             },
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "type": "block",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "vg_name": "ceph_vg0"
Nov 25 20:20:12 compute-0 boring_wilson[198946]:         }
Nov 25 20:20:12 compute-0 boring_wilson[198946]:     ],
Nov 25 20:20:12 compute-0 boring_wilson[198946]:     "1": [
Nov 25 20:20:12 compute-0 boring_wilson[198946]:         {
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "devices": [
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "/dev/loop4"
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             ],
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_name": "ceph_lv1",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_size": "21470642176",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "name": "ceph_lv1",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "tags": {
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.cluster_name": "ceph",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.crush_device_class": "",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.encrypted": "0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.osd_id": "1",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.type": "block",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.vdo": "0"
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             },
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "type": "block",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "vg_name": "ceph_vg1"
Nov 25 20:20:12 compute-0 boring_wilson[198946]:         }
Nov 25 20:20:12 compute-0 boring_wilson[198946]:     ],
Nov 25 20:20:12 compute-0 boring_wilson[198946]:     "2": [
Nov 25 20:20:12 compute-0 boring_wilson[198946]:         {
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "devices": [
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "/dev/loop5"
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             ],
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_name": "ceph_lv2",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_size": "21470642176",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "name": "ceph_lv2",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "tags": {
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.cluster_name": "ceph",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.crush_device_class": "",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.encrypted": "0",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.osd_id": "2",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.type": "block",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:                 "ceph.vdo": "0"
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             },
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "type": "block",
Nov 25 20:20:12 compute-0 boring_wilson[198946]:             "vg_name": "ceph_vg2"
Nov 25 20:20:12 compute-0 boring_wilson[198946]:         }
Nov 25 20:20:12 compute-0 boring_wilson[198946]:     ]
Nov 25 20:20:12 compute-0 boring_wilson[198946]: }
Nov 25 20:20:12 compute-0 systemd[1]: libpod-76da3dac34cb69c2fb5a15472dce39eb80735424d1f6a9d925e213f75a951fdc.scope: Deactivated successfully.
Nov 25 20:20:12 compute-0 podman[198929]: 2025-11-25 20:20:12.947255748 +0000 UTC m=+0.965244819 container died 76da3dac34cb69c2fb5a15472dce39eb80735424d1f6a9d925e213f75a951fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 20:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3819e960f781b0eac824e6ef2d974b080244b5ea5c419c042eefd6b81350a50f-merged.mount: Deactivated successfully.
Nov 25 20:20:13 compute-0 podman[198929]: 2025-11-25 20:20:13.045368956 +0000 UTC m=+1.063357987 container remove 76da3dac34cb69c2fb5a15472dce39eb80735424d1f6a9d925e213f75a951fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:20:13 compute-0 python3.9[199079]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764102011.264268-554-270115059238685/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:13 compute-0 systemd[1]: libpod-conmon-76da3dac34cb69c2fb5a15472dce39eb80735424d1f6a9d925e213f75a951fdc.scope: Deactivated successfully.
Nov 25 20:20:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v458: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:13 compute-0 sudo[199075]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:13 compute-0 sudo[198699]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:13 compute-0 sudo[199092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:13 compute-0 sudo[199092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:13 compute-0 sudo[199092]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:13 compute-0 sudo[199141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:20:13 compute-0 sudo[199141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:13 compute-0 sudo[199141]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:13 compute-0 sudo[199180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:13 compute-0 sudo[199180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:13 compute-0 sudo[199180]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:13 compute-0 sudo[199239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:20:13 compute-0 sudo[199239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:13 compute-0 sudo[199355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhlsxehsvhpfzlpxqkxgvcgjkgptzsxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102013.277243-554-12243936183717/AnsiballZ_stat.py'
Nov 25 20:20:13 compute-0 sudo[199355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:13 compute-0 podman[199384]: 2025-11-25 20:20:13.813422062 +0000 UTC m=+0.050810021 container create eeb164d63e6eed57f20e2c7672f4e6242bc84ffcc7b80017ea0cf968fca83f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:20:13 compute-0 systemd[1]: Started libpod-conmon-eeb164d63e6eed57f20e2c7672f4e6242bc84ffcc7b80017ea0cf968fca83f81.scope.
Nov 25 20:20:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:20:13 compute-0 python3.9[199362]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:13 compute-0 podman[199384]: 2025-11-25 20:20:13.79377969 +0000 UTC m=+0.031167709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:20:13 compute-0 podman[199384]: 2025-11-25 20:20:13.903290122 +0000 UTC m=+0.140678171 container init eeb164d63e6eed57f20e2c7672f4e6242bc84ffcc7b80017ea0cf968fca83f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:20:13 compute-0 podman[199384]: 2025-11-25 20:20:13.90924866 +0000 UTC m=+0.146636619 container start eeb164d63e6eed57f20e2c7672f4e6242bc84ffcc7b80017ea0cf968fca83f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shtern, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:20:13 compute-0 keen_shtern[199400]: 167 167
Nov 25 20:20:13 compute-0 systemd[1]: libpod-eeb164d63e6eed57f20e2c7672f4e6242bc84ffcc7b80017ea0cf968fca83f81.scope: Deactivated successfully.
Nov 25 20:20:13 compute-0 podman[199384]: 2025-11-25 20:20:13.915924157 +0000 UTC m=+0.153312216 container attach eeb164d63e6eed57f20e2c7672f4e6242bc84ffcc7b80017ea0cf968fca83f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:20:13 compute-0 podman[199384]: 2025-11-25 20:20:13.91677755 +0000 UTC m=+0.154165519 container died eeb164d63e6eed57f20e2c7672f4e6242bc84ffcc7b80017ea0cf968fca83f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a74fc33ae25d19550397b56117486782c3f59e706c6bf5570a3cf9e8a35f9fac-merged.mount: Deactivated successfully.
Nov 25 20:20:13 compute-0 sudo[199355]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:13 compute-0 podman[199384]: 2025-11-25 20:20:13.958844738 +0000 UTC m=+0.196232717 container remove eeb164d63e6eed57f20e2c7672f4e6242bc84ffcc7b80017ea0cf968fca83f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:20:13 compute-0 systemd[1]: libpod-conmon-eeb164d63e6eed57f20e2c7672f4e6242bc84ffcc7b80017ea0cf968fca83f81.scope: Deactivated successfully.
Nov 25 20:20:14 compute-0 ceph-mon[75144]: pgmap v458: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:14 compute-0 podman[199472]: 2025-11-25 20:20:14.196104975 +0000 UTC m=+0.075187380 container create afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elbakyan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 25 20:20:14 compute-0 systemd[1]: Started libpod-conmon-afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb.scope.
Nov 25 20:20:14 compute-0 podman[199472]: 2025-11-25 20:20:14.160668353 +0000 UTC m=+0.039750808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:20:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941c957c06806b06748898e29d54cf921e64cd0bac93b254326cb6f61e5887a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941c957c06806b06748898e29d54cf921e64cd0bac93b254326cb6f61e5887a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941c957c06806b06748898e29d54cf921e64cd0bac93b254326cb6f61e5887a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941c957c06806b06748898e29d54cf921e64cd0bac93b254326cb6f61e5887a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:20:14 compute-0 podman[199472]: 2025-11-25 20:20:14.313507116 +0000 UTC m=+0.192589581 container init afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:20:14 compute-0 podman[199472]: 2025-11-25 20:20:14.33208542 +0000 UTC m=+0.211167815 container start afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:20:14 compute-0 podman[199472]: 2025-11-25 20:20:14.336087926 +0000 UTC m=+0.215170321 container attach afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elbakyan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:20:14 compute-0 sudo[199569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwbsjqdvvgpwjjbpvowztwetkofxyqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102013.277243-554-12243936183717/AnsiballZ_copy.py'
Nov 25 20:20:14 compute-0 sudo[199569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:14 compute-0 python3.9[199571]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764102013.277243-554-12243936183717/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:14 compute-0 sudo[199569]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v459: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:15 compute-0 sudo[199738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgaulpecnycrwdzesdpqyogbjkpeoagz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102014.8502378-554-205145513993663/AnsiballZ_stat.py'
Nov 25 20:20:15 compute-0 sudo[199738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:15 compute-0 great_elbakyan[199515]: {
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "osd_id": 2,
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "type": "bluestore"
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:     },
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "osd_id": 1,
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "type": "bluestore"
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:     },
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "osd_id": 0,
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:         "type": "bluestore"
Nov 25 20:20:15 compute-0 great_elbakyan[199515]:     }
Nov 25 20:20:15 compute-0 great_elbakyan[199515]: }
Nov 25 20:20:15 compute-0 systemd[1]: libpod-afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb.scope: Deactivated successfully.
Nov 25 20:20:15 compute-0 systemd[1]: libpod-afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb.scope: Consumed 1.128s CPU time.
Nov 25 20:20:15 compute-0 podman[199472]: 2025-11-25 20:20:15.452134732 +0000 UTC m=+1.331217197 container died afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:20:15 compute-0 python3.9[199740]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-941c957c06806b06748898e29d54cf921e64cd0bac93b254326cb6f61e5887a6-merged.mount: Deactivated successfully.
Nov 25 20:20:15 compute-0 podman[199472]: 2025-11-25 20:20:15.519846973 +0000 UTC m=+1.398929338 container remove afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elbakyan, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:20:15 compute-0 sudo[199738]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:15 compute-0 systemd[1]: libpod-conmon-afcc6fa2eaee9b642dda6b03bb7c3fdfb4a939dfc208e58509f23977fc5419cb.scope: Deactivated successfully.
Nov 25 20:20:15 compute-0 sudo[199239]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:20:15 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:20:15 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:15 compute-0 sudo[199784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:20:15 compute-0 sudo[199784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:15 compute-0 sudo[199784]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:15 compute-0 sudo[199836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:20:15 compute-0 sudo[199836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:20:15 compute-0 sudo[199836]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:16 compute-0 sudo[199934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zipvqllueszacpcbqzwmdsrhoggsuonw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102014.8502378-554-205145513993663/AnsiballZ_copy.py'
Nov 25 20:20:16 compute-0 sudo[199934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:16 compute-0 ceph-mon[75144]: pgmap v459: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:16 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:16 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:20:16 compute-0 python3.9[199936]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764102014.8502378-554-205145513993663/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:16 compute-0 sudo[199934]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:16 compute-0 sudo[200086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcicrdourbivhdndnhamzxplngrvvpgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102016.5175772-554-152541383577139/AnsiballZ_stat.py'
Nov 25 20:20:16 compute-0 sudo[200086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v460: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:17 compute-0 python3.9[200088]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:17 compute-0 sudo[200086]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:17 compute-0 sudo[200211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgexbdweigzgtyczcemhqkmpsrdxooqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102016.5175772-554-152541383577139/AnsiballZ_copy.py'
Nov 25 20:20:17 compute-0 sudo[200211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:17 compute-0 python3.9[200213]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764102016.5175772-554-152541383577139/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:17 compute-0 sudo[200211]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:18 compute-0 ceph-mon[75144]: pgmap v460: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:18 compute-0 sudo[200363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymwfzspqajsqzuojfkozzixsckgrkgag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102018.0882454-554-52954389263301/AnsiballZ_stat.py'
Nov 25 20:20:18 compute-0 sudo[200363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:18 compute-0 python3.9[200365]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:18 compute-0 sudo[200363]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v461: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:19 compute-0 sudo[200488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spnnvckaexkifvvitijlvfxklrcrxcos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102018.0882454-554-52954389263301/AnsiballZ_copy.py'
Nov 25 20:20:19 compute-0 sudo[200488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:19 compute-0 python3.9[200490]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764102018.0882454-554-52954389263301/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:19 compute-0 sudo[200488]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:20 compute-0 sudo[200640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luagrbpjhlvtnrulocxpmhxmqukzebzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102019.7669535-554-276646682985642/AnsiballZ_stat.py'
Nov 25 20:20:20 compute-0 sudo[200640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:20 compute-0 ceph-mon[75144]: pgmap v461: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:20 compute-0 podman[200642]: 2025-11-25 20:20:20.288153514 +0000 UTC m=+0.083973383 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:20:20 compute-0 python3.9[200643]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:20 compute-0 sudo[200640]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:20 compute-0 sudo[200784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itiweemhuwgcaififmxnnsdahelxygsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102019.7669535-554-276646682985642/AnsiballZ_copy.py'
Nov 25 20:20:20 compute-0 sudo[200784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v462: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:21 compute-0 python3.9[200786]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764102019.7669535-554-276646682985642/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:21 compute-0 sudo[200784]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:21 compute-0 sudo[200936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzvgpngmzgobecirgtnpelyyhuoygnhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102021.3765738-554-111364597563357/AnsiballZ_stat.py'
Nov 25 20:20:21 compute-0 sudo[200936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:21 compute-0 python3.9[200938]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:22 compute-0 sudo[200936]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:22 compute-0 ceph-mon[75144]: pgmap v462: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:22 compute-0 sudo[201059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrblvyqkhzpoawtfeepapcedzrzugang ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102021.3765738-554-111364597563357/AnsiballZ_copy.py'
Nov 25 20:20:22 compute-0 sudo[201059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:22 compute-0 python3.9[201061]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764102021.3765738-554-111364597563357/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:22 compute-0 sudo[201059]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v463: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:23 compute-0 sudo[201211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vadjqykhvyjjwsmwfjexnnglliisrsci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102022.9018939-554-64379872053814/AnsiballZ_stat.py'
Nov 25 20:20:23 compute-0 sudo[201211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:23 compute-0 python3.9[201213]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:23 compute-0 sudo[201211]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:24 compute-0 sudo[201336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljmtvzvzqwlncyegkosgclvhfpiesina ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102022.9018939-554-64379872053814/AnsiballZ_copy.py'
Nov 25 20:20:24 compute-0 sudo[201336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:24 compute-0 ceph-mon[75144]: pgmap v463: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:24 compute-0 python3.9[201338]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764102022.9018939-554-64379872053814/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:24 compute-0 sudo[201336]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:24 compute-0 sudo[201488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqcgmujkqwgwglsupfociwtandpsemwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102024.5646734-667-12597099001542/AnsiballZ_command.py'
Nov 25 20:20:24 compute-0 sudo[201488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v464: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:25 compute-0 python3.9[201490]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 25 20:20:25 compute-0 sudo[201488]: pam_unix(sudo:session): session closed for user root
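The command task above seeds libvirt's SASL database for live migration: saslpasswd2 reads the password from stdin because of -p (the stdin=12345678 value is the CI job's throwaway secret). A sketch of the same call from Python, using only the flags shown in the log:

import subprocess

def set_sasl_password(user, password, realm="openstack",
                      appname="libvirt", db="/etc/libvirt/passwd.db"):
    """Feed the password to saslpasswd2 on stdin, exactly as the task does."""
    subprocess.run(
        ["saslpasswd2", "-f", db, "-p", "-a", appname, "-u", realm, user],
        input=password.encode(), check=True)

if __name__ == "__main__":
    set_sasl_password("migration", "CHANGE-ME")  # placeholder, not the CI value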
Nov 25 20:20:25 compute-0 sudo[201641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnrmtricnxxvxvthfhdedtexzqmbrcqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102025.4541435-676-146211224186946/AnsiballZ_file.py'
Nov 25 20:20:25 compute-0 sudo[201641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:26 compute-0 python3.9[201643]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:26 compute-0 sudo[201641]: pam_unix(sudo:session): session closed for user root
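This file task and the thirteen that follow pre-create systemd drop-in directories, one <unit>.socket.d per libvirt socket unit; any *.conf dropped into such a directory extends the packaged unit without editing it. A sketch that creates the same tree (unit list collected from the tasks below):

import os

UNITS = [
    "virtlogd.socket", "virtlogd-admin.socket",
    "virtnodedevd.socket", "virtnodedevd-ro.socket", "virtnodedevd-admin.socket",
    "virtproxyd.socket", "virtproxyd-ro.socket", "virtproxyd-admin.socket",
    "virtqemud.socket", "virtqemud-ro.socket", "virtqemud-admin.socket",
    "virtsecretd.socket", "virtsecretd-ro.socket", "virtsecretd-admin.socket",
]

for unit in UNITS:
    dropin = f"/etc/systemd/system/{unit}.d"        # systemd drop-in convention
    os.makedirs(dropin, mode=0o755, exist_ok=True)  # mode=0755, as logged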
Nov 25 20:20:26 compute-0 ceph-mon[75144]: pgmap v464: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:26 compute-0 sudo[201793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjtnpsukoqtnlciiqzfzbuwrcsqqmiiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102026.2354808-676-91895638809659/AnsiballZ_file.py'
Nov 25 20:20:26 compute-0 sudo[201793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:26 compute-0 python3.9[201795]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:26 compute-0 sudo[201793]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:20:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:20:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:20:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:20:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:20:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:20:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v465: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:27 compute-0 sudo[201945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrlwjkwwaiwyxygwaxdizyislnfkshux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102027.003018-676-165633824269409/AnsiballZ_file.py'
Nov 25 20:20:27 compute-0 sudo[201945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:27 compute-0 python3.9[201947]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:27 compute-0 sudo[201945]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:28 compute-0 ceph-mon[75144]: pgmap v465: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:28 compute-0 sudo[202097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixkjqeaacvzzcgloiwxcsulogxzmddiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102027.9020789-676-18789906075953/AnsiballZ_file.py'
Nov 25 20:20:28 compute-0 sudo[202097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:28 compute-0 python3.9[202099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:28 compute-0 sudo[202097]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v466: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:29 compute-0 sudo[202249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhgedhlijrdmxqjpiokubcspnfixgqfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102028.763337-676-269444676564729/AnsiballZ_file.py'
Nov 25 20:20:29 compute-0 sudo[202249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:29 compute-0 python3.9[202251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:29 compute-0 sudo[202249]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:29 compute-0 sudo[202401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlnbbfwxyonnnwvywbntmllttjwvhctd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102029.6247709-676-96774267743791/AnsiballZ_file.py'
Nov 25 20:20:29 compute-0 sudo[202401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:30 compute-0 python3.9[202403]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:30 compute-0 sudo[202401]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:30 compute-0 ceph-mon[75144]: pgmap v466: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:30 compute-0 sudo[202553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrsnnmmqwfjkpdynclaxwgecdeovxecj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102030.3395264-676-28595394830511/AnsiballZ_file.py'
Nov 25 20:20:30 compute-0 sudo[202553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:30 compute-0 python3.9[202555]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:30 compute-0 sudo[202553]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v467: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:31 compute-0 ceph-mon[75144]: pgmap v467: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:31 compute-0 sudo[202716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npvrfkqiaxucppgaizkuwifownqamehd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102031.102202-676-11749288472088/AnsiballZ_file.py'
Nov 25 20:20:31 compute-0 sudo[202716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:31 compute-0 podman[202679]: 2025-11-25 20:20:31.585872959 +0000 UTC m=+0.140100665 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
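The podman entry above is a periodic health-check transcript: podman runs the container's configured test command (/openstack/healthcheck) and journals the resulting health_status. The same status can be read back with podman inspect; a sketch, assuming the podman CLI is on PATH and using the container name from the log:

import json
import subprocess

def health_status(container="ovn_controller"):
    """Return the container's health status as podman records it."""
    out = subprocess.run(["podman", "inspect", container],
                         capture_output=True, check=True, text=True).stdout
    state = json.loads(out)[0]["State"]
    # "Health" is only present when the container defines a healthcheck.
    return state.get("Health", {}).get("Status", "unknown")

if __name__ == "__main__":
    print(health_status())  # expected "healthy", matching the log line above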
Nov 25 20:20:31 compute-0 python3.9[202723]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:31 compute-0 sudo[202716]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:32 compute-0 sudo[202883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shurxssqyhcrzvxvomqcfhmjuutqlyjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102031.9111621-676-195986515362682/AnsiballZ_file.py'
Nov 25 20:20:32 compute-0 sudo[202883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:32 compute-0 python3.9[202885]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:32 compute-0 sudo[202883]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:33 compute-0 sudo[203035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srnpecuyotvmjwaenrixoovdiytrcrgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102032.6884935-676-42462330817757/AnsiballZ_file.py'
Nov 25 20:20:33 compute-0 sudo[203035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v468: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:33 compute-0 python3.9[203037]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:33 compute-0 sudo[203035]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:33 compute-0 sudo[203187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gudbatkdzzimpqsytbkgbsweucdtsoki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102033.4420466-676-245387907136455/AnsiballZ_file.py'
Nov 25 20:20:33 compute-0 sudo[203187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:34 compute-0 python3.9[203189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:34 compute-0 sudo[203187]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:34 compute-0 ceph-mon[75144]: pgmap v468: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:34 compute-0 sudo[203339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpimyigyxhoqdtyacsraybwfijmcqppj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102034.2779777-676-113554532783298/AnsiballZ_file.py'
Nov 25 20:20:34 compute-0 sudo[203339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:34 compute-0 python3.9[203341]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:34 compute-0 sudo[203339]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v469: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:35 compute-0 sudo[203491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzdmruuzwdrvnrfbmztswhkaoeqiwygp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102035.0643442-676-30607089739275/AnsiballZ_file.py'
Nov 25 20:20:35 compute-0 sudo[203491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:35 compute-0 python3.9[203493]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:35 compute-0 sudo[203491]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:36 compute-0 ceph-mon[75144]: pgmap v469: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:36 compute-0 sudo[203643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymqljmanwiygjwhjnuzbchbmnyftusyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102035.877592-676-4378419965083/AnsiballZ_file.py'
Nov 25 20:20:36 compute-0 sudo[203643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:36 compute-0 python3.9[203645]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:36 compute-0 sudo[203643]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v470: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:37 compute-0 sudo[203795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqrnsptdnkxwejrckvzqdineundklgzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102036.797263-775-1395608723147/AnsiballZ_stat.py'
Nov 25 20:20:37 compute-0 sudo[203795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:37 compute-0 python3.9[203797]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:37 compute-0 sudo[203795]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:37 compute-0 sudo[203918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgmjxbeajhqlzlbonefzeutruylcrnii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102036.797263-775-1395608723147/AnsiballZ_copy.py'
Nov 25 20:20:37 compute-0 sudo[203918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:38 compute-0 python3.9[203920]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102036.797263-775-1395608723147/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:38 compute-0 sudo[203918]: pam_unix(sudo:session): session closed for user root
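Every override.conf in this section is written with the same two-step handshake visible in the paired entries above: an ansible.legacy.stat with a SHA-1 checksum first, then an ansible.legacy.copy only when the content differs. A sketch of that idempotency pattern (helper names are ours):

import hashlib
import os
import shutil

def sha1_of(path):
    """SHA-1 hex digest of path, or None when the file does not exist."""
    if not os.path.exists(path):
        return None
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def copy_if_changed(src, dest):
    """Copy src over dest only when the checksums differ; report 'changed'."""
    if sha1_of(src) == sha1_of(dest):
        return False          # idempotent no-op; the task reports ok
    shutil.copyfile(src, dest)
    return True               # the task reports changed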
Nov 25 20:20:38 compute-0 ceph-mon[75144]: pgmap v470: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:38 compute-0 sudo[204070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlescoeefdgczukkzawfluoxjfderpbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102038.2310748-775-95470306601017/AnsiballZ_stat.py'
Nov 25 20:20:38 compute-0 sudo[204070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:38 compute-0 python3.9[204072]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:38 compute-0 sudo[204070]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v471: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:39 compute-0 sudo[204193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmjlgkpfwqsrqlbtdpguddxvhkenrnew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102038.2310748-775-95470306601017/AnsiballZ_copy.py'
Nov 25 20:20:39 compute-0 sudo[204193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:39 compute-0 python3.9[204195]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102038.2310748-775-95470306601017/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:39 compute-0 sudo[204193]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:40 compute-0 sudo[204345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cggdwwmihoeppqiztlhmrkvmvyeixrat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102039.6835387-775-206569133053062/AnsiballZ_stat.py'
Nov 25 20:20:40 compute-0 sudo[204345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:40 compute-0 ceph-mon[75144]: pgmap v471: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:40 compute-0 python3.9[204347]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:40 compute-0 sudo[204345]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:40 compute-0 sudo[204468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqyqaajuxdxttsekdwpsrtnbycuruzan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102039.6835387-775-206569133053062/AnsiballZ_copy.py'
Nov 25 20:20:40 compute-0 sudo[204468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:41 compute-0 python3.9[204470]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102039.6835387-775-206569133053062/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:41 compute-0 sudo[204468]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v472: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:41 compute-0 sudo[204620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxacufsyxizxytizhmenyccdejqseiaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102041.2340515-775-75171376317385/AnsiballZ_stat.py'
Nov 25 20:20:41 compute-0 sudo[204620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:41 compute-0 python3.9[204622]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:41 compute-0 sudo[204620]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:42 compute-0 ceph-mon[75144]: pgmap v472: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:42 compute-0 sudo[204743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efhqjpsgwgmlsjveydohlxwsxylukxkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102041.2340515-775-75171376317385/AnsiballZ_copy.py'
Nov 25 20:20:42 compute-0 sudo[204743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:42 compute-0 python3.9[204745]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102041.2340515-775-75171376317385/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:42 compute-0 sudo[204743]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v473: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:43 compute-0 sudo[204895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgizziartcxmqikyxjzxniargmesudep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102042.7674973-775-221938890310313/AnsiballZ_stat.py'
Nov 25 20:20:43 compute-0 sudo[204895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:43 compute-0 python3.9[204897]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:43 compute-0 sudo[204895]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:43 compute-0 sudo[205018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhghtqxrytyoefngmiripndwobwpsodo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102042.7674973-775-221938890310313/AnsiballZ_copy.py'
Nov 25 20:20:43 compute-0 sudo[205018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:44 compute-0 python3.9[205020]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102042.7674973-775-221938890310313/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:44 compute-0 sudo[205018]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:44 compute-0 ceph-mon[75144]: pgmap v473: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:44 compute-0 sudo[205170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrdfveizssjnrmjlqsxojhnyhxlziazj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102044.375296-775-130088371388414/AnsiballZ_stat.py'
Nov 25 20:20:44 compute-0 sudo[205170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:44 compute-0 python3.9[205172]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:44 compute-0 sudo[205170]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v474: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:45 compute-0 sudo[205293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuooyitblqpvhhncjqeaogyjlytpflaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102044.375296-775-130088371388414/AnsiballZ_copy.py'
Nov 25 20:20:45 compute-0 sudo[205293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:45 compute-0 python3.9[205295]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102044.375296-775-130088371388414/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:45 compute-0 sudo[205293]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:46 compute-0 ceph-mon[75144]: pgmap v474: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:46 compute-0 sudo[205445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbnhkegtbtzjfydeuwbtfpvspygtlljz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102045.8990853-775-156675035814628/AnsiballZ_stat.py'
Nov 25 20:20:46 compute-0 sudo[205445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:46 compute-0 python3.9[205447]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:46 compute-0 sudo[205445]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:46 compute-0 sudo[205568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlsyhczbahggwgyhkgafqqmmcqkpqlvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102045.8990853-775-156675035814628/AnsiballZ_copy.py'
Nov 25 20:20:46 compute-0 sudo[205568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v475: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:47 compute-0 python3.9[205570]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102045.8990853-775-156675035814628/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:47 compute-0 sudo[205568]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:47 compute-0 sudo[205720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxdxxzjmofffewwzjyddrjzxrftmagvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102047.3815079-775-273571327823029/AnsiballZ_stat.py'
Nov 25 20:20:47 compute-0 sudo[205720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:48 compute-0 python3.9[205722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:48 compute-0 sudo[205720]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:48 compute-0 ceph-mon[75144]: pgmap v475: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:48 compute-0 sudo[205843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxcpzfuhfkglnrrgedsxdaddowprjtet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102047.3815079-775-273571327823029/AnsiballZ_copy.py'
Nov 25 20:20:48 compute-0 sudo[205843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:48 compute-0 python3.9[205845]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102047.3815079-775-273571327823029/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:48 compute-0 sudo[205843]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:20:48.937 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:20:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:20:48.937 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:20:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:20:48.938 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
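The three ovn_metadata_agent lines above show oslo.concurrency serialising the agent's child-process check behind a named lock: acquire, run, release, held for well under a millisecond. A minimal sketch of the same primitive, assuming oslo.concurrency is installed (the lock name is taken from the log; the body is elided):

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    """Runs with the named lock held, as in the log entries above."""
    pass  # body elided; the real monitor inspects its child processes here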
Nov 25 20:20:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v476: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:49 compute-0 sudo[205995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcfvfwgjtolhevroyhksstzwusckccll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102048.8666227-775-67965605937592/AnsiballZ_stat.py'
Nov 25 20:20:49 compute-0 sudo[205995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:49 compute-0 python3.9[205997]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:49 compute-0 sudo[205995]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:49 compute-0 sudo[206118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyaaipvnnsbkibwqljewsaaihbdbloav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102048.8666227-775-67965605937592/AnsiballZ_copy.py'
Nov 25 20:20:49 compute-0 sudo[206118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:50 compute-0 python3.9[206120]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102048.8666227-775-67965605937592/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:50 compute-0 ceph-mon[75144]: pgmap v476: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:50 compute-0 sudo[206118]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:50 compute-0 sudo[206283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toqagtmomblhfaskjiascuggvnipqgmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102050.4275732-775-10400986681624/AnsiballZ_stat.py'
Nov 25 20:20:50 compute-0 sudo[206283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:50 compute-0 podman[206244]: 2025-11-25 20:20:50.887395998 +0000 UTC m=+0.093873025 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:20:51 compute-0 python3.9[206291]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v477: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:51 compute-0 sudo[206283]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:51 compute-0 sudo[206412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cogjeurnmaycsppwdrwgnowcmlddrvsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102050.4275732-775-10400986681624/AnsiballZ_copy.py'
Nov 25 20:20:51 compute-0 sudo[206412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:51 compute-0 python3.9[206414]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102050.4275732-775-10400986681624/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:51 compute-0 sudo[206412]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:52 compute-0 ceph-mon[75144]: pgmap v477: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:52 compute-0 sudo[206564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kukyblnnohheutbbzbhznueonnwkgzqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102052.0293581-775-85827128248450/AnsiballZ_stat.py'
Nov 25 20:20:52 compute-0 sudo[206564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:52 compute-0 python3.9[206566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:52 compute-0 sudo[206564]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v478: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:53 compute-0 sudo[206687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-appbwzdebfgapuctdtjayipmvxetmsem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102052.0293581-775-85827128248450/AnsiballZ_copy.py'
Nov 25 20:20:53 compute-0 sudo[206687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:53 compute-0 python3.9[206689]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102052.0293581-775-85827128248450/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:53 compute-0 sudo[206687]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:53 compute-0 sudo[206839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzecroycufvxakbbfwrunjhpcqhobbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102053.521947-775-187985469345995/AnsiballZ_stat.py'
Nov 25 20:20:53 compute-0 sudo[206839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:54 compute-0 python3.9[206841]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:54 compute-0 sudo[206839]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:54 compute-0 ceph-mon[75144]: pgmap v478: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:54 compute-0 sudo[206962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifepdzyogfixpfbirosukugeauwhkqey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102053.521947-775-187985469345995/AnsiballZ_copy.py'
Nov 25 20:20:54 compute-0 sudo[206962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:54 compute-0 python3.9[206964]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102053.521947-775-187985469345995/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:54 compute-0 sudo[206962]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:20:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v479: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:55 compute-0 sudo[207114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-achwrkjgmluxhordiggwffupxlacbqfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102054.9997633-775-176496235353629/AnsiballZ_stat.py'
Nov 25 20:20:55 compute-0 sudo[207114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:55 compute-0 python3.9[207116]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:55 compute-0 sudo[207114]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:56 compute-0 sudo[207237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loefnbfkdakgjbpsyemzryzwklbsplea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102054.9997633-775-176496235353629/AnsiballZ_copy.py'
Nov 25 20:20:56 compute-0 sudo[207237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:56 compute-0 ceph-mon[75144]: pgmap v479: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:56 compute-0 python3.9[207239]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102054.9997633-775-176496235353629/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:56 compute-0 sudo[207237]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:20:56 compute-0 sudo[207389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghxzgesnhipajvmsswjopcliqdlzhihd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102056.4343364-775-195515607073790/AnsiballZ_stat.py'
Nov 25 20:20:56 compute-0 sudo[207389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:20:56
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.meta', 'images', 'vms', 'cephfs.cephfs.data', '.mgr']
Nov 25 20:20:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:20:57 compute-0 python3.9[207391]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:20:57 compute-0 sudo[207389]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v480: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:57 compute-0 sudo[207512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsfiktewscedprxbraqrnmoqqgzbkgcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102056.4343364-775-195515607073790/AnsiballZ_copy.py'
Nov 25 20:20:57 compute-0 sudo[207512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:57 compute-0 ceph-mgr[75443]: client.0 ms_handle_reset on v2:192.168.122.100:6800/446496168
Nov 25 20:20:57 compute-0 python3.9[207514]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102056.4343364-775-195515607073790/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:20:57 compute-0 sudo[207512]: pam_unix(sudo:session): session closed for user root
Nov 25 20:20:58 compute-0 ceph-mon[75144]: pgmap v480: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:58 compute-0 python3.9[207664]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
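The multi-line command above is a verification step: with pipefail set, it recursively lists /run/libvirt with SELinux contexts and greps for container_*_t types, so a match would flag leftover container labels on the host runtime directory. A sketch reproducing the check (bash is assumed, since pipefail is a bash option):

import subprocess

CHECK = r"set -o pipefail; ls -lRZ /run/libvirt | grep -E ':container_\S+_t'"

result = subprocess.run(CHECK, shell=True, executable="/bin/bash",
                        capture_output=True, text=True)
# grep exits non-zero when nothing matches, which is the clean outcome here.
print("leftover container labels found" if result.returncode == 0 else "clean")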
Nov 25 20:20:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v481: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:20:59 compute-0 sudo[207817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yilvjmxdyjivvbfmvcylnxuxwvtfupsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102058.9779272-981-64482477942464/AnsiballZ_seboolean.py'
Nov 25 20:20:59 compute-0 sudo[207817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:20:59 compute-0 python3.9[207819]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 25 20:21:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:00 compute-0 ceph-mon[75144]: pgmap v481: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:01 compute-0 sudo[207817]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v482: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:01 compute-0 ceph-mon[75144]: pgmap v482: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:01 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 20:21:01 compute-0 sudo[207973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqcpslxfwepoxrzhveuldqxpjykdhrkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102061.2948406-989-76011729952805/AnsiballZ_copy.py'
Nov 25 20:21:01 compute-0 sudo[207973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:01 compute-0 podman[207975]: 2025-11-25 20:21:01.865246821 +0000 UTC m=+0.141043468 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
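This podman record is a periodic health probe of the ovn_controller container: health_status=healthy with a failing streak of 0, and the config_data blob is the edpm_ansible container definition, including its /openstack/healthcheck test command. To re-run the probe or read the stored verdict by hand (the Go template field is .State.Health on current podman releases and .State.Healthcheck on older ones):

    podman healthcheck run ovn_controller && echo healthy
    podman inspect ovn_controller --format '{{.State.Health.Status}}'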
Nov 25 20:21:01 compute-0 python3.9[207976]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:01 compute-0 sudo[207973]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:21:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
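Each effective_target_ratio/Pool pair above is one autoscaler evaluation: 64411926528 B is the root capacity (matching the pgmap's 60 GiB avail), "using X of space" is the pool's share of it, and the raw pg target is that share times the pool bias times the cluster PG budget, quantized to a power of two with a floor of 1. The .mgr line exposes the budget: 0.004311449990232467 / 1.4371499967441557e-05 = 300, consistent with the default mon_target_pg_per_osd=100 across the 3 OSDs here (how replica size enters that product is an assumption, not shown in this log); 0.0043 then quantizes up to 1 PG. The already-quantized summary is available as:

    ceph osd pool autoscale-status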
Nov 25 20:21:02 compute-0 sudo[208152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntnxtlnpabksbheqbbfgtuskskgsjgjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102062.1404433-989-244646616975575/AnsiballZ_copy.py'
Nov 25 20:21:02 compute-0 sudo[208152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:02 compute-0 python3.9[208154]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:02 compute-0 sudo[208152]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v483: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:03 compute-0 sudo[208304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmllxdjzhzkoavyjioslrqvyywajgdsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102062.9914649-989-226730090513863/AnsiballZ_copy.py'
Nov 25 20:21:03 compute-0 sudo[208304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:03 compute-0 python3.9[208306]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:03 compute-0 sudo[208304]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:04 compute-0 ceph-mon[75144]: pgmap v483: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:04 compute-0 sudo[208456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-todtybmealeyitdmhuefzktwdyuhmqqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102063.8176844-989-232245027414322/AnsiballZ_copy.py'
Nov 25 20:21:04 compute-0 sudo[208456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:04 compute-0 python3.9[208458]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:04 compute-0 sudo[208456]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:04 compute-0 sudo[208608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abryhensdnvfpjauolkojglodxdeaqwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102064.602881-989-133320916831516/AnsiballZ_copy.py'
Nov 25 20:21:04 compute-0 sudo[208608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v484: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:05 compute-0 python3.9[208610]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:05 compute-0 sudo[208608]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:05 compute-0 sudo[208760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmjgmtpxgexglwavuafsvksqtzjuiatp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102065.4098802-1025-159588634846463/AnsiballZ_copy.py'
Nov 25 20:21:05 compute-0 sudo[208760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:05 compute-0 python3.9[208762]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:06 compute-0 sudo[208760]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:06 compute-0 ceph-mon[75144]: pgmap v484: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:06 compute-0 sudo[208912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuqukkqwcnnunwatuhiayraakaxohbsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102066.18994-1025-20689549563473/AnsiballZ_copy.py'
Nov 25 20:21:06 compute-0 sudo[208912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:06 compute-0 python3.9[208914]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:06 compute-0 sudo[208912]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v485: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:07 compute-0 sudo[209064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skricqbugisazghjowfvidlbfrfyvbys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102067.1041741-1025-148927048967942/AnsiballZ_copy.py'
Nov 25 20:21:07 compute-0 sudo[209064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:07 compute-0 python3.9[209066]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:07 compute-0 sudo[209064]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:08 compute-0 ceph-mon[75144]: pgmap v485: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:08 compute-0 sudo[209216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkeaqhoysihdudzoiupvrpyemrkpjfqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102067.8662405-1025-216416048122912/AnsiballZ_copy.py'
Nov 25 20:21:08 compute-0 sudo[209216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:08 compute-0 python3.9[209218]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:08 compute-0 sudo[209216]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:09 compute-0 sudo[209368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiqbrabavijipljnpyzfebbtvinksizc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102068.6503782-1025-101131629931981/AnsiballZ_copy.py'
Nov 25 20:21:09 compute-0 sudo[209368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v486: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:09 compute-0 python3.9[209370]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:09 compute-0 sudo[209368]: pam_unix(sudo:session): session closed for user root
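Taken together, the copy tasks between 20:21:01 and 20:21:09 fan one TLS keypair plus CA out of /var/lib/openstack/certs/libvirt/default/ into the paths libvirt and QEMU expect: servercert.pem and clientcert.pem under /etc/pki/libvirt (0644), serverkey.pem (0600) and clientkey.pem (0644) under /etc/pki/libvirt/private, cacert.pem under /etc/pki/CA, and the server/client cert, key and CA copies under /etc/pki/qemu (root:qemu, 0640). Note the asymmetry recorded above: the client key lands at 0644 while the server key is 0600, which is only safe if the private/ directory itself blocks other users. One way to audit the result:

    ls -lZ /etc/pki/libvirt /etc/pki/libvirt/private /etc/pki/CA/cacert.pem /etc/pki/qemu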
Nov 25 20:21:09 compute-0 sudo[209520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqwgznirxnxeuitludbphbhookqvxmqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102069.508313-1061-251523130905309/AnsiballZ_systemd.py'
Nov 25 20:21:09 compute-0 sudo[209520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:10 compute-0 ceph-mon[75144]: pgmap v486: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:10 compute-0 python3.9[209522]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:21:10 compute-0 systemd[1]: Reloading.
Nov 25 20:21:10 compute-0 systemd-rc-local-generator[209551]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:21:10 compute-0 systemd-sysv-generator[209554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:21:10 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 20:21:10 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 20:21:10 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 20:21:10 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 20:21:10 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 25 20:21:10 compute-0 systemd[1]: Started libvirt logging daemon.
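ansible.builtin.systemd with daemon_reload=True first re-reads all unit files, which is what picks up drop-ins like the virtsecretd-admin.socket.d/override.conf written earlier, and then restarts the unit; as the messages show, systemd brings up the daemon's activation sockets before the daemon itself. The same sequence by hand:

    systemctl daemon-reload
    systemctl restart virtlogd.service
    systemctl status virtlogd.service virtlogd.socket virtlogd-admin.socket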
Nov 25 20:21:10 compute-0 sudo[209520]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v487: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:11 compute-0 sudo[209715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scsmiyieiunauspkbchoctubsrmhboig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102071.046763-1061-161798691277090/AnsiballZ_systemd.py'
Nov 25 20:21:11 compute-0 sudo[209715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:11 compute-0 python3.9[209717]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:21:11 compute-0 systemd[1]: Reloading.
Nov 25 20:21:11 compute-0 systemd-rc-local-generator[209745]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:21:11 compute-0 systemd-sysv-generator[209748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:21:12 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 20:21:12 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 20:21:12 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 20:21:12 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 20:21:12 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 20:21:12 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 20:21:12 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 20:21:12 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 20:21:12 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 25 20:21:12 compute-0 ceph-mon[75144]: pgmap v487: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:12 compute-0 sudo[209715]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:12 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 20:21:12 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 20:21:12 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 20:21:12 compute-0 sudo[209939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmreslaaevvvoqacqsufcllnofpbsltn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102072.396717-1061-12651707161519/AnsiballZ_systemd.py'
Nov 25 20:21:12 compute-0 sudo[209939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:13 compute-0 python3.9[209941]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:21:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v488: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:13 compute-0 systemd[1]: Reloading.
Nov 25 20:21:13 compute-0 systemd-rc-local-generator[209966]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:21:13 compute-0 systemd-sysv-generator[209969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:21:13 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 20:21:13 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 20:21:13 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 20:21:13 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 20:21:13 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 25 20:21:13 compute-0 setroubleshoot[209754]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 485c52f5-7851-4125-a038-e3e407ff819e
Nov 25 20:21:13 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 25 20:21:13 compute-0 setroubleshoot[209754]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
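The two plugins reflect setroubleshoot's standard triage order: dac_override first (91.4 confidence), because a dac_read_search denial against virtlogd usually means it tripped over a file with unexpected ownership or mode, and the catchall audit2allow route only if the access turns out to be genuinely intended. The full-auditing step works because installing any watch rule (the /etc/shadow example above) makes the kernel attach PATH records to subsequent AVCs, so re-triggering the denial names the offending file. While investigating, a narrower alternative to loading a local policy module is putting just this domain into permissive mode (virtlogd_t as the domain name is an assumption; confirm with ps -eZ | grep virtlogd):

    semanage permissive -a virtlogd_t    # log this domain's denials without blocking
    semanage permissive -d virtlogd_t    # revert when done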
                                                  
Nov 25 20:21:13 compute-0 sudo[209939]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:14 compute-0 sudo[210153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njebxbjcgkeifocbokiercymmvfxvbri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102073.7703211-1061-25022586676343/AnsiballZ_systemd.py'
Nov 25 20:21:14 compute-0 sudo[210153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:14 compute-0 ceph-mon[75144]: pgmap v488: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:14 compute-0 python3.9[210155]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:21:14 compute-0 systemd[1]: Reloading.
Nov 25 20:21:14 compute-0 systemd-rc-local-generator[210184]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:21:14 compute-0 systemd-sysv-generator[210187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:21:14 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 20:21:14 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 20:21:14 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 20:21:14 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 20:21:14 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 20:21:14 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 20:21:14 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 20:21:14 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 20:21:14 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 20:21:14 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 20:21:14 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 20:21:14 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 25 20:21:14 compute-0 sudo[210153]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v489: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:15 compute-0 sudo[210369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkiggxqgqupknezoqvbdpwdptiwpbsux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102075.167175-1061-227513062239790/AnsiballZ_systemd.py'
Nov 25 20:21:15 compute-0 sudo[210369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:15 compute-0 sudo[210372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:21:15 compute-0 sudo[210372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:15 compute-0 sudo[210372]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:15 compute-0 python3.9[210371]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:21:15 compute-0 systemd[1]: Reloading.
Nov 25 20:21:16 compute-0 systemd-rc-local-generator[210449]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:21:16 compute-0 systemd-sysv-generator[210452]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:21:16 compute-0 ceph-mon[75144]: pgmap v489: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:16 compute-0 sudo[210398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:21:16 compute-0 sudo[210398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:16 compute-0 sudo[210398]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:16 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 20:21:16 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 20:21:16 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 20:21:16 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 20:21:16 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 20:21:16 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 20:21:16 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 20:21:16 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 20:21:16 compute-0 sudo[210460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:21:16 compute-0 sudo[210460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:16 compute-0 sudo[210460]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:16 compute-0 sudo[210369]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:16 compute-0 sudo[210508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:21:16 compute-0 sudo[210508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:17 compute-0 sudo[210508]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:21:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:21:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:21:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:21:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:21:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:21:17 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 87116e2f-f374-428c-8646-b9a48b73241f does not exist
Nov 25 20:21:17 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 6b0352db-1e5d-4cbc-9baf-9f6f6acd7fab does not exist
Nov 25 20:21:17 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 3c54f645-bed5-4489-bbef-6807cab47cec does not exist
Nov 25 20:21:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:21:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:21:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:21:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:21:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:21:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
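This burst of handle_command entries is cephadm reconciling through the mgr (entity mgr.compute-0.hdjasd): it regenerates a minimal ceph.conf, fetches the client.admin and client.bootstrap-osd keyrings, persists its OSD-removal queue under a mgr/cephadm config-key, and queries for destroyed OSDs before deploying new ones. The same queries from the admin CLI:

    ceph config generate-minimal-conf
    ceph auth get client.admin
    ceph auth get client.bootstrap-osd
    ceph osd tree destroyed -f json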
Nov 25 20:21:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v490: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:17 compute-0 sudo[210728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjyioqiqgsrirfozqfkvgxhaiiltwdsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102076.7946842-1098-138498982366912/AnsiballZ_file.py'
Nov 25 20:21:17 compute-0 sudo[210728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:17 compute-0 sudo[210696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:21:17 compute-0 sudo[210696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:17 compute-0 sudo[210696]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:21:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:21:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:21:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:21:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:21:17 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:21:17 compute-0 sudo[210742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:21:17 compute-0 sudo[210742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:17 compute-0 sudo[210742]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:17 compute-0 sudo[210767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:21:17 compute-0 sudo[210767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:17 compute-0 sudo[210767]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:17 compute-0 python3.9[210739]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
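ansible.builtin.file with state=directory is an idempotent mkdir-plus-chmod; for this task the one-line shell equivalent is:

    install -d -m 0755 /var/lib/openstack/config/ceph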
Nov 25 20:21:17 compute-0 sudo[210728]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:17 compute-0 sudo[210792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:21:17 compute-0 sudo[210792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:17 compute-0 podman[210953]: 2025-11-25 20:21:17.852448317 +0000 UTC m=+0.068232810 container create 70ede9a6eefab1649f10ae0a070d244b44b6787a661c340ee01334e6c51dc797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:21:17 compute-0 systemd[1]: Started libpod-conmon-70ede9a6eefab1649f10ae0a070d244b44b6787a661c340ee01334e6c51dc797.scope.
Nov 25 20:21:17 compute-0 podman[210953]: 2025-11-25 20:21:17.8217352 +0000 UTC m=+0.037519743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:21:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:21:17 compute-0 podman[210953]: 2025-11-25 20:21:17.970868008 +0000 UTC m=+0.186652481 container init 70ede9a6eefab1649f10ae0a070d244b44b6787a661c340ee01334e6c51dc797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:21:17 compute-0 podman[210953]: 2025-11-25 20:21:17.982736885 +0000 UTC m=+0.198521368 container start 70ede9a6eefab1649f10ae0a070d244b44b6787a661c340ee01334e6c51dc797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:21:17 compute-0 podman[210953]: 2025-11-25 20:21:17.98642037 +0000 UTC m=+0.202204823 container attach 70ede9a6eefab1649f10ae0a070d244b44b6787a661c340ee01334e6c51dc797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:21:17 compute-0 vigilant_neumann[210995]: 167 167
Nov 25 20:21:17 compute-0 systemd[1]: libpod-70ede9a6eefab1649f10ae0a070d244b44b6787a661c340ee01334e6c51dc797.scope: Deactivated successfully.
Nov 25 20:21:17 compute-0 podman[210953]: 2025-11-25 20:21:17.992617331 +0000 UTC m=+0.208401824 container died 70ede9a6eefab1649f10ae0a070d244b44b6787a661c340ee01334e6c51dc797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:21:18 compute-0 sudo[211025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyfdontoxptlpisydfdjmuzceyyowmyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102077.5858898-1106-87502180542473/AnsiballZ_find.py'
Nov 25 20:21:18 compute-0 sudo[211025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-592f3385f24433c9a3c28e6bc65e8f4d1c74fb467ac19d7c7b0cb49774fb5ea1-merged.mount: Deactivated successfully.
Nov 25 20:21:18 compute-0 podman[210953]: 2025-11-25 20:21:18.035089202 +0000 UTC m=+0.250873655 container remove 70ede9a6eefab1649f10ae0a070d244b44b6787a661c340ee01334e6c51dc797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 20:21:18 compute-0 systemd[1]: libpod-conmon-70ede9a6eefab1649f10ae0a070d244b44b6787a661c340ee01334e6c51dc797.scope: Deactivated successfully.
Nov 25 20:21:18 compute-0 ceph-mon[75144]: pgmap v490: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:18 compute-0 podman[211048]: 2025-11-25 20:21:18.269524071 +0000 UTC m=+0.073256921 container create c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:21:18 compute-0 python3.9[211030]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:21:18 compute-0 systemd[1]: Started libpod-conmon-c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b.scope.
Nov 25 20:21:18 compute-0 sudo[211025]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:18 compute-0 podman[211048]: 2025-11-25 20:21:18.241271868 +0000 UTC m=+0.045004768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:21:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a6ef5dc3f48576f60620f27072b27224f9e3c9e6b6897871391afe08963d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a6ef5dc3f48576f60620f27072b27224f9e3c9e6b6897871391afe08963d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a6ef5dc3f48576f60620f27072b27224f9e3c9e6b6897871391afe08963d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a6ef5dc3f48576f60620f27072b27224f9e3c9e6b6897871391afe08963d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a6ef5dc3f48576f60620f27072b27224f9e3c9e6b6897871391afe08963d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:18 compute-0 podman[211048]: 2025-11-25 20:21:18.366991138 +0000 UTC m=+0.170724078 container init c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:21:18 compute-0 podman[211048]: 2025-11-25 20:21:18.381033842 +0000 UTC m=+0.184766712 container start c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_aryabhata, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:21:18 compute-0 podman[211048]: 2025-11-25 20:21:18.384457591 +0000 UTC m=+0.188190481 container attach c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:21:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v491: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:19 compute-0 sudo[211227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvqrjhjfopskfxsiekaezidisjgbnpnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102078.7383213-1114-68158637776726/AnsiballZ_command.py'
Nov 25 20:21:19 compute-0 sudo[211227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:19 compute-0 python3.9[211231]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
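The task echoes the literal cluster name ("ceph") and then scrapes the fsid out of the generated ceph.conf; the trailing xargs only trims whitespace from awk's output. An equivalent that parses the INI structure instead of pattern-matching every line containing "fsid" (assuming the key sits under [global], as in a cephadm minimal conf):

    python3 -c 'import configparser; c = configparser.ConfigParser(); c.read("/var/lib/openstack/config/ceph/ceph.conf"); print(c["global"]["fsid"])'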
Nov 25 20:21:19 compute-0 sudo[211227]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:19 compute-0 cranky_aryabhata[211065]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:21:19 compute-0 cranky_aryabhata[211065]: --> relative data size: 1.0
Nov 25 20:21:19 compute-0 cranky_aryabhata[211065]: --> All data devices are unavailable
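The throwaway cephadm container (cranky_aryabhata) ran ceph-volume lvm batch over the three pre-created LVs and declared them all unavailable; for lvm batch that normally means the LVs are already consumed by prepared OSDs rather than an error, so with --yes it simply creates nothing, and cephadm follows up at 20:21:20 with lvm list to read back the existing OSD metadata. The read-only check by hand (mirroring the invocation logged below, minus the per-cluster fsid):

    cephadm ceph-volume -- lvm list --format json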
Nov 25 20:21:19 compute-0 systemd[1]: libpod-c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b.scope: Deactivated successfully.
Nov 25 20:21:19 compute-0 systemd[1]: libpod-c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b.scope: Consumed 1.147s CPU time.
Nov 25 20:21:19 compute-0 podman[211048]: 2025-11-25 20:21:19.600051399 +0000 UTC m=+1.403784289 container died c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_aryabhata, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-af6a6ef5dc3f48576f60620f27072b27224f9e3c9e6b6897871391afe08963d2-merged.mount: Deactivated successfully.
Nov 25 20:21:19 compute-0 podman[211048]: 2025-11-25 20:21:19.670659249 +0000 UTC m=+1.474392109 container remove c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 20:21:19 compute-0 systemd[1]: libpod-conmon-c345af4dac6d07783302f93ab3aaa15cf694e396725366754aaae513cac1e37b.scope: Deactivated successfully.
Nov 25 20:21:19 compute-0 sudo[210792]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:19 compute-0 sudo[211286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:21:19 compute-0 sudo[211286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:19 compute-0 sudo[211286]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:19 compute-0 sudo[211334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:21:19 compute-0 sudo[211334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:19 compute-0 sudo[211334]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:19 compute-0 sudo[211388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:21:19 compute-0 sudo[211388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:19 compute-0 sudo[211388]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:20 compute-0 sudo[211436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:21:20 compute-0 sudo[211436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:20 compute-0 ceph-mon[75144]: pgmap v491: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:20 compute-0 python3.9[211511]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
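The ansible-ansible.builtin.find record above scans /var/lib/openstack/config/ceph for *.keyring files (recurse=False, hidden=False, file_type=file). A minimal Python stand-in for that search, for reference only (the module performs this internally):

    # Illustrative stand-in for the ansible.builtin.find call logged above:
    # list regular files matching *.keyring directly under the config dir
    # (recurse=False, hidden=False in the module invocation).
    from pathlib import Path

    CONFIG_DIR = Path("/var/lib/openstack/config/ceph")

    keyrings = sorted(
        p for p in CONFIG_DIR.glob("*.keyring")        # patterns=['*.keyring']
        if p.is_file() and not p.name.startswith(".")  # file_type=file, hidden=False
    )
    for p in keyrings:
        print(p)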
Nov 25 20:21:20 compute-0 podman[211557]: 2025-11-25 20:21:20.372179459 +0000 UTC m=+0.045841920 container create a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:21:20 compute-0 systemd[1]: Started libpod-conmon-a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527.scope.
Nov 25 20:21:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:21:20 compute-0 podman[211557]: 2025-11-25 20:21:20.34911062 +0000 UTC m=+0.022773111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:21:20 compute-0 podman[211557]: 2025-11-25 20:21:20.455257272 +0000 UTC m=+0.128919743 container init a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_proskuriakova, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:21:20 compute-0 podman[211557]: 2025-11-25 20:21:20.464194385 +0000 UTC m=+0.137856836 container start a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_proskuriakova, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:21:20 compute-0 podman[211557]: 2025-11-25 20:21:20.46747816 +0000 UTC m=+0.141140621 container attach a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:21:20 compute-0 friendly_proskuriakova[211592]: 167 167
Nov 25 20:21:20 compute-0 systemd[1]: libpod-a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527.scope: Deactivated successfully.
Nov 25 20:21:20 compute-0 conmon[211592]: conmon a69757e5cc873f5e17d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527.scope/container/memory.events
Nov 25 20:21:20 compute-0 podman[211557]: 2025-11-25 20:21:20.471212426 +0000 UTC m=+0.144874877 container died a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a6fce2a162fc3ebf9084bd6b00f6e4d412e2dcac1e43cfa86b76aa6bd298e25-merged.mount: Deactivated successfully.
Nov 25 20:21:20 compute-0 podman[211557]: 2025-11-25 20:21:20.506118912 +0000 UTC m=+0.179781393 container remove a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_proskuriakova, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:21:20 compute-0 systemd[1]: libpod-conmon-a69757e5cc873f5e17d5141a77a7b876d740c999b96994adff1bd51bf1b0c527.scope: Deactivated successfully.
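Each cephadm ceph-volume call in this log runs as a throwaway container, and the friendly_proskuriakova cycle above shows the full pattern: image pull, create, init, start, attach, died, remove, with systemd opening and tearing down the matching libpod and libpod-conmon scopes around it. A small sketch that distills journal lines like these into per-container event sequences:

    # Sketch: pick out the podman lifecycle verbs (create/init/start/attach/
    # died/remove) for each container id from journal lines like those above.
    import re
    import sys

    EVENT = re.compile(r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            verb, cid = m.groups()
            print(cid[:12], verb)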
Nov 25 20:21:20 compute-0 podman[211665]: 2025-11-25 20:21:20.726191307 +0000 UTC m=+0.046111946 container create 5f8dacdc82e9bda5b5b313d219d01040664a8351df4252941effb6236eb4661c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:21:20 compute-0 systemd[1]: Started libpod-conmon-5f8dacdc82e9bda5b5b313d219d01040664a8351df4252941effb6236eb4661c.scope.
Nov 25 20:21:20 compute-0 podman[211665]: 2025-11-25 20:21:20.708184911 +0000 UTC m=+0.028105580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:21:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a5205b8fa98cf7faa5eae161ed38102c344d87504f31b86e73b594b3589b7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a5205b8fa98cf7faa5eae161ed38102c344d87504f31b86e73b594b3589b7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a5205b8fa98cf7faa5eae161ed38102c344d87504f31b86e73b594b3589b7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a5205b8fa98cf7faa5eae161ed38102c344d87504f31b86e73b594b3589b7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:20 compute-0 podman[211665]: 2025-11-25 20:21:20.840625655 +0000 UTC m=+0.160546404 container init 5f8dacdc82e9bda5b5b313d219d01040664a8351df4252941effb6236eb4661c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:21:20 compute-0 podman[211665]: 2025-11-25 20:21:20.854531645 +0000 UTC m=+0.174452334 container start 5f8dacdc82e9bda5b5b313d219d01040664a8351df4252941effb6236eb4661c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:21:20 compute-0 podman[211665]: 2025-11-25 20:21:20.85819116 +0000 UTC m=+0.178111899 container attach 5f8dacdc82e9bda5b5b313d219d01040664a8351df4252941effb6236eb4661c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:21:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v492: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:21 compute-0 python3.9[211763]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:21 compute-0 podman[211858]: 2025-11-25 20:21:21.59524162 +0000 UTC m=+0.069627847 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
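The health_status record above embeds the agent's full runtime definition as a Python-literal dict in the config_data label. Since it is a literal, ast.literal_eval can recover it; the value below is a trimmed excerpt of that label (the real one carries the full volume and environment lists):

    # Trimmed excerpt of the config_data label from the health_status line
    # above; the full label is a complete Python literal and parses the same way.
    import ast

    config_data = ast.literal_eval(
        "{'image': 'quay.io/podified-antelope-centos9/"
        "openstack-neutron-metadata-agent-ovn:current-podified', "
        "'healthcheck': {'test': '/openstack/healthcheck'}, 'restart': 'always'}"
    )
    print(config_data["healthcheck"]["test"])   # /openstack/healthcheck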
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]: {
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:     "0": [
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:         {
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "devices": [
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "/dev/loop3"
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             ],
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_name": "ceph_lv0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_size": "21470642176",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "name": "ceph_lv0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "tags": {
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.cluster_name": "ceph",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.crush_device_class": "",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.encrypted": "0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.osd_id": "0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.type": "block",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.vdo": "0"
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             },
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "type": "block",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "vg_name": "ceph_vg0"
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:         }
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:     ],
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:     "1": [
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:         {
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "devices": [
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "/dev/loop4"
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             ],
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_name": "ceph_lv1",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_size": "21470642176",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "name": "ceph_lv1",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "tags": {
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.cluster_name": "ceph",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.crush_device_class": "",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.encrypted": "0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.osd_id": "1",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.type": "block",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.vdo": "0"
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             },
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "type": "block",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "vg_name": "ceph_vg1"
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:         }
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:     ],
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:     "2": [
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:         {
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "devices": [
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "/dev/loop5"
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             ],
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_name": "ceph_lv2",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_size": "21470642176",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "name": "ceph_lv2",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "tags": {
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.cluster_name": "ceph",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.crush_device_class": "",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.encrypted": "0",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.osd_id": "2",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.type": "block",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:                 "ceph.vdo": "0"
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             },
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "type": "block",
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:             "vg_name": "ceph_vg2"
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:         }
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]:     ]
Nov 25 20:21:21 compute-0 dreamy_beaver[211708]: }
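The JSON emitted by dreamy_beaver above is the `ceph-volume lvm list --format json` result requested at 20:21:20: top-level keys are OSD ids, each holding one LV record whose flat lv_tags string duplicates the parsed tags map. A short sketch (file name illustrative; the log only shows the JSON on stdout) that builds an osd_id-to-LV map and re-parses lv_tags:

    # Sketch: digest `ceph-volume lvm list --format json` output like the
    # block above into an {osd_id: lv_path} map, and re-parse the flat
    # lv_tags string. The file name is an illustrative placeholder.
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    def parse_lv_tags(s: str) -> dict:
        # lv_tags is "k1=v1,k2=v2,..."; values may be empty (e.g.
        # ceph.crush_device_class=). Naive split assumes no commas in values,
        # which holds for the tags shown above.
        return dict(item.split("=", 1) for item in s.split(",") if item)

    osd_to_lv = {}
    for osd_id, entries in lvm.items():
        for entry in entries:
            tags = parse_lv_tags(entry["lv_tags"])
            assert tags == entry["tags"]        # the two representations agree
            osd_to_lv[int(osd_id)] = entry["lv_path"]

    print(osd_to_lv)   # e.g. {0: '/dev/ceph_vg0/ceph_lv0', 1: ..., 2: ...}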
Nov 25 20:21:21 compute-0 systemd[1]: libpod-5f8dacdc82e9bda5b5b313d219d01040664a8351df4252941effb6236eb4661c.scope: Deactivated successfully.
Nov 25 20:21:21 compute-0 podman[211665]: 2025-11-25 20:21:21.670488131 +0000 UTC m=+0.990408810 container died 5f8dacdc82e9bda5b5b313d219d01040664a8351df4252941effb6236eb4661c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:21:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a5205b8fa98cf7faa5eae161ed38102c344d87504f31b86e73b594b3589b7b-merged.mount: Deactivated successfully.
Nov 25 20:21:21 compute-0 podman[211665]: 2025-11-25 20:21:21.750326691 +0000 UTC m=+1.070247370 container remove 5f8dacdc82e9bda5b5b313d219d01040664a8351df4252941effb6236eb4661c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:21:21 compute-0 systemd[1]: libpod-conmon-5f8dacdc82e9bda5b5b313d219d01040664a8351df4252941effb6236eb4661c.scope: Deactivated successfully.
Nov 25 20:21:21 compute-0 python3.9[211899]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102080.6112444-1133-214339964275055/.source.xml follow=False _original_basename=secret.xml.j2 checksum=3dc02cc055b82e3fc356b56190136485ef95990d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:21 compute-0 sudo[211436]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:21 compute-0 sudo[211922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:21:21 compute-0 sudo[211922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:21 compute-0 sudo[211922]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:21 compute-0 sudo[211971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:21:21 compute-0 sudo[211971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:21 compute-0 sudo[211971]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:22 compute-0 sudo[211999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:21:22 compute-0 sudo[211999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:22 compute-0 sudo[211999]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:22 compute-0 sudo[212062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:21:22 compute-0 sudo[212062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:22 compute-0 ceph-mon[75144]: pgmap v492: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:22 compute-0 sudo[212198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sokrzbyebpiniweogileitdciwjoomta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102082.008936-1148-64529064445326/AnsiballZ_command.py'
Nov 25 20:21:22 compute-0 sudo[212198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:22 compute-0 podman[212213]: 2025-11-25 20:21:22.49465925 +0000 UTC m=+0.059272468 container create a392885ae7c104cfcf10899f61c5abd0a6bb545c64bf786c0e70c00c46a141a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ritchie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:21:22 compute-0 systemd[1]: Started libpod-conmon-a392885ae7c104cfcf10899f61c5abd0a6bb545c64bf786c0e70c00c46a141a7.scope.
Nov 25 20:21:22 compute-0 podman[212213]: 2025-11-25 20:21:22.469345054 +0000 UTC m=+0.033958332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:21:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:21:22 compute-0 python3.9[212206]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 712dd110-763a-5547-8ef7-acda1414fdce
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
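The shell snippet above refreshes a libvirt secret keyed by the Ceph FSID, using the /tmp/secret.xml staged at 20:21:21 (its content was logged as NOT_LOGGING_PARAMETER). The XML body below is therefore an assumption modeled on the usual libvirt-for-Ceph secret shape, not taken from the log:

    # Hedged sketch of the virsh secret (re)definition step logged above.
    # The XML body and secret name are assumptions: the staged secret.xml was
    # copied with content=NOT_LOGGING_PARAMETER, so only its shape is inferred.
    import subprocess
    import tempfile

    FSID = "712dd110-763a-5547-8ef7-acda1414fdce"   # from the cephadm calls above

    SECRET_XML = f"""<secret ephemeral='no' private='no'>
      <uuid>{FSID}</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>   <!-- assumed name -->
      </usage>
    </secret>
    """

    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write(SECRET_XML)
        path = f.name

    # Mirrors the logged shell: drop any stale secret (may not exist yet),
    # then define the new one from the staged XML.
    subprocess.run(["virsh", "secret-undefine", FSID], check=False)
    subprocess.run(["virsh", "secret-define", "--file", path], check=True)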
Nov 25 20:21:22 compute-0 podman[212213]: 2025-11-25 20:21:22.583673678 +0000 UTC m=+0.148286906 container init a392885ae7c104cfcf10899f61c5abd0a6bb545c64bf786c0e70c00c46a141a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:21:22 compute-0 podman[212213]: 2025-11-25 20:21:22.590007702 +0000 UTC m=+0.154620890 container start a392885ae7c104cfcf10899f61c5abd0a6bb545c64bf786c0e70c00c46a141a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:21:22 compute-0 podman[212213]: 2025-11-25 20:21:22.594116699 +0000 UTC m=+0.158729887 container attach a392885ae7c104cfcf10899f61c5abd0a6bb545c64bf786c0e70c00c46a141a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:21:22 compute-0 wizardly_ritchie[212230]: 167 167
Nov 25 20:21:22 compute-0 systemd[1]: libpod-a392885ae7c104cfcf10899f61c5abd0a6bb545c64bf786c0e70c00c46a141a7.scope: Deactivated successfully.
Nov 25 20:21:22 compute-0 podman[212213]: 2025-11-25 20:21:22.598504213 +0000 UTC m=+0.163117431 container died a392885ae7c104cfcf10899f61c5abd0a6bb545c64bf786c0e70c00c46a141a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ritchie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:21:22 compute-0 polkitd[43533]: Registered Authentication Agent for unix-process:212234:323829 (system bus name :1.2664 [pkttyagent --process 212234 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 20:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-de7aa27e7e469b2219f94cdfc75a3ca0d12cae0d4030b80e3f44149b5baa62ce-merged.mount: Deactivated successfully.
Nov 25 20:21:22 compute-0 polkitd[43533]: Unregistered Authentication Agent for unix-process:212234:323829 (system bus name :1.2664, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 20:21:22 compute-0 podman[212213]: 2025-11-25 20:21:22.64044705 +0000 UTC m=+0.205060238 container remove a392885ae7c104cfcf10899f61c5abd0a6bb545c64bf786c0e70c00c46a141a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ritchie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:21:22 compute-0 systemd[1]: libpod-conmon-a392885ae7c104cfcf10899f61c5abd0a6bb545c64bf786c0e70c00c46a141a7.scope: Deactivated successfully.
Nov 25 20:21:22 compute-0 polkitd[43533]: Registered Authentication Agent for unix-process:212233:323828 (system bus name :1.2666 [pkttyagent --process 212233 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 20:21:22 compute-0 polkitd[43533]: Unregistered Authentication Agent for unix-process:212233:323828 (system bus name :1.2666, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 20:21:22 compute-0 sudo[212198]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:22 compute-0 podman[212288]: 2025-11-25 20:21:22.893479021 +0000 UTC m=+0.063445606 container create cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kilby, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:21:22 compute-0 systemd[1]: Started libpod-conmon-cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80.scope.
Nov 25 20:21:22 compute-0 podman[212288]: 2025-11-25 20:21:22.872899027 +0000 UTC m=+0.042865652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:21:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c213838b6cc0389f83fe687312b77acd1295f24b92045e536e33d85d8f8820/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c213838b6cc0389f83fe687312b77acd1295f24b92045e536e33d85d8f8820/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c213838b6cc0389f83fe687312b77acd1295f24b92045e536e33d85d8f8820/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c213838b6cc0389f83fe687312b77acd1295f24b92045e536e33d85d8f8820/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:21:23 compute-0 podman[212288]: 2025-11-25 20:21:23.000450645 +0000 UTC m=+0.170417270 container init cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:21:23 compute-0 podman[212288]: 2025-11-25 20:21:23.013678168 +0000 UTC m=+0.183644783 container start cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kilby, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:21:23 compute-0 podman[212288]: 2025-11-25 20:21:23.018318238 +0000 UTC m=+0.188284843 container attach cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 20:21:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v493: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:23 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 20:21:23 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 20:21:23 compute-0 python3.9[212438]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:24 compute-0 adoring_kilby[212308]: {
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "osd_id": 2,
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "type": "bluestore"
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:     },
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "osd_id": 1,
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "type": "bluestore"
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:     },
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "osd_id": 0,
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:         "type": "bluestore"
Nov 25 20:21:24 compute-0 adoring_kilby[212308]:     }
Nov 25 20:21:24 compute-0 adoring_kilby[212308]: }
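adoring_kilby's output above is the matching `ceph-volume raw list --format json` view requested at 20:21:22: the same three bluestore OSDs, keyed by osd_uuid instead of osd_id and pointing at the device-mapper aliases of the LVs. A cross-check sketch against the earlier lvm list blob (filenames are illustrative placeholders):

    # Sketch: cross-check `lvm list` (keyed by osd_id) against `raw list`
    # (keyed by osd_uuid), as in the two JSON blocks logged above.
    import json

    lvm = json.load(open("lvm_list.json"))
    raw = json.load(open("raw_list.json"))

    for osd_id, entries in lvm.items():
        fsid = entries[0]["tags"]["ceph.osd_fsid"]
        raw_entry = raw[fsid]
        assert raw_entry["osd_id"] == int(osd_id)
        # /dev/ceph_vg0/ceph_lv0 and /dev/mapper/ceph_vg0-ceph_lv0 name the same LV
        print(osd_id, entries[0]["lv_path"], "<->", raw_entry["device"],
              raw_entry["type"])   # all bluestore in this log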
Nov 25 20:21:24 compute-0 systemd[1]: libpod-cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80.scope: Deactivated successfully.
Nov 25 20:21:24 compute-0 systemd[1]: libpod-cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80.scope: Consumed 1.056s CPU time.
Nov 25 20:21:24 compute-0 podman[212288]: 2025-11-25 20:21:24.063297782 +0000 UTC m=+1.233264377 container died cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:21:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0c213838b6cc0389f83fe687312b77acd1295f24b92045e536e33d85d8f8820-merged.mount: Deactivated successfully.
Nov 25 20:21:24 compute-0 podman[212288]: 2025-11-25 20:21:24.127906757 +0000 UTC m=+1.297873332 container remove cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:21:24 compute-0 systemd[1]: libpod-conmon-cab6dda4664dc9c6ead10a7e27f67e6fd657a16268434ea27778a4bd6c406b80.scope: Deactivated successfully.
Nov 25 20:21:24 compute-0 sudo[212062]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:21:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:21:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:21:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:21:24 compute-0 ceph-mon[75144]: pgmap v493: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:21:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:21:24 compute-0 sudo[212651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pncbgtwlrsiipklmwpyygsjulohpvxke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102083.876631-1164-210204782461480/AnsiballZ_command.py'
Nov 25 20:21:24 compute-0 sudo[212651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:24 compute-0 sudo[212606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:21:24 compute-0 sudo[212606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:24 compute-0 sudo[212606]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:24 compute-0 sudo[212656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:21:24 compute-0 sudo[212656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:21:24 compute-0 sudo[212656]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:24 compute-0 sudo[212651]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v494: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:25 compute-0 sudo[212831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckrceczzbgawjihkpllvimqoargipcor ; FSID=712dd110-763a-5547-8ef7-acda1414fdce KEY=AQDXCyZpAAAAABAA6kidp+XIon3+r0gcfgtA2g== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102084.7995768-1172-246534547735152/AnsiballZ_command.py'
Nov 25 20:21:25 compute-0 sudo[212831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:25 compute-0 polkitd[43533]: Registered Authentication Agent for unix-process:212834:324104 (system bus name :1.2675 [pkttyagent --process 212834 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 20:21:25 compute-0 polkitd[43533]: Unregistered Authentication Agent for unix-process:212834:324104 (system bus name :1.2675, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 20:21:25 compute-0 sudo[212831]: pam_unix(sudo:session): session closed for user root
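The sudo record at 20:21:25 exports FSID and KEY around an AnsiballZ_command.py run; the command itself is wrapped and not visible in the log. That environment matches the usual follow-up to the secret definition above, injecting the cephx key into libvirt, so the sketch below is an inference rather than a logged command:

    # Assumed follow-up to the secret definition above: the FSID/KEY
    # environment matches the usual `virsh secret-set-value` step, but the
    # actual command is wrapped inside AnsiballZ_command.py and not logged.
    import os
    import subprocess

    fsid = os.environ["FSID"]   # 712dd110-763a-5547-8ef7-acda1414fdce
    key = os.environ["KEY"]     # base64 cephx key, AQDXCyZp... in this run

    subprocess.run(
        ["virsh", "secret-set-value", "--secret", fsid, "--base64", key],
        check=True,
    )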
Nov 25 20:21:26 compute-0 sudo[212989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnfjklhgudsztazctacglwxufgasidzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102085.6847882-1180-190783097757668/AnsiballZ_copy.py'
Nov 25 20:21:26 compute-0 sudo[212989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:26 compute-0 ceph-mon[75144]: pgmap v494: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:26 compute-0 python3.9[212991]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:26 compute-0 sudo[212989]: pam_unix(sudo:session): session closed for user root
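The copy above installs /var/lib/openstack/config/ceph/ceph.conf as /etc/ceph/ceph.conf (root:root, 0644) without logging its content. A minimal client-side stand-in consistent with this cluster is sketched below; mon_host is an assumption taken from the mgr address 192.168.122.100 seen elsewhere in this log:

    # Hedged sketch of a minimal client ceph.conf for this cluster; the real
    # file comes from /var/lib/openstack/config/ceph and its content is not
    # logged. mon_host is an assumption, fsid is taken from the cephadm calls.
    CEPH_CONF = """[global]
    fsid = 712dd110-763a-5547-8ef7-acda1414fdce
    mon_host = 192.168.122.100
    """

    with open("/tmp/ceph.conf.example", "w") as f:   # illustrative path
        f.write(CEPH_CONF)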
Nov 25 20:21:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:21:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:21:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:21:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:21:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:21:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:21:26 compute-0 sudo[213141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rguzlrwoxtsztxjajwofjxovfhaunvun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102086.5301797-1188-30637517868454/AnsiballZ_stat.py'
Nov 25 20:21:26 compute-0 sudo[213141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v495: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:27 compute-0 python3.9[213143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:27 compute-0 sudo[213141]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:27 compute-0 sudo[213264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sraiqlalznirdhlsgulbwmkblzihcfuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102086.5301797-1188-30637517868454/AnsiballZ_copy.py'
Nov 25 20:21:27 compute-0 sudo[213264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:27 compute-0 python3.9[213266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102086.5301797-1188-30637517868454/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:27 compute-0 sudo[213264]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:28 compute-0 ceph-mon[75144]: pgmap v495: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:28 compute-0 sudo[213416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylkosdzuxqthlazkmcfykoyhwlxmacsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102088.109498-1204-25690081334365/AnsiballZ_file.py'
Nov 25 20:21:28 compute-0 sudo[213416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:28 compute-0 python3.9[213418]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:28 compute-0 sudo[213416]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v496: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:29 compute-0 sudo[213568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mykroagattntifotvkqfgjuhdbiuhbrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102088.960067-1212-168220633139121/AnsiballZ_stat.py'
Nov 25 20:21:29 compute-0 sudo[213568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:29 compute-0 python3.9[213570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:29 compute-0 sudo[213568]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:29 compute-0 sudo[213646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prbiwgdugikwxkcvuiitvjhojjpsoowh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102088.960067-1212-168220633139121/AnsiballZ_file.py'
Nov 25 20:21:29 compute-0 sudo[213646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:30 compute-0 python3.9[213648]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:30 compute-0 sudo[213646]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:30 compute-0 ceph-mon[75144]: pgmap v496: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:30 compute-0 sudo[213798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqlkxvwllwhdoycoibnjnhvmjvdupsqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102090.256742-1224-9115665452068/AnsiballZ_stat.py'
Nov 25 20:21:30 compute-0 sudo[213798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:31 compute-0 python3.9[213800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:31 compute-0 sudo[213798]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v497: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:31 compute-0 sudo[213876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxlrjcluwrwucvaglzgohuufwfghfhfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102090.256742-1224-9115665452068/AnsiballZ_file.py'
Nov 25 20:21:31 compute-0 sudo[213876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:31 compute-0 python3.9[213878]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.78zbh5lk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:31 compute-0 sudo[213876]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:31 compute-0 ceph-mon[75144]: pgmap v497: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:32 compute-0 sudo[214044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlxakxxsvvpdiugggqlapkgnqudttxez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102091.7006729-1236-223867183196002/AnsiballZ_stat.py'
Nov 25 20:21:32 compute-0 sudo[214044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:32 compute-0 podman[214002]: 2025-11-25 20:21:32.08042157 +0000 UTC m=+0.086847023 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 20:21:32 compute-0 python3.9[214049]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:32 compute-0 sudo[214044]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:32 compute-0 sudo[214131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezslczmihymfnustkjlmtvyhflizjwke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102091.7006729-1236-223867183196002/AnsiballZ_file.py'
Nov 25 20:21:32 compute-0 sudo[214131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:32 compute-0 python3.9[214133]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:32 compute-0 sudo[214131]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v498: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:33 compute-0 sudo[214283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtfbedyxzodpodjroofwabouotghngim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102093.036801-1249-191094228508986/AnsiballZ_command.py'
Nov 25 20:21:33 compute-0 sudo[214283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:33 compute-0 python3.9[214285]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:21:33 compute-0 sudo[214283]: pam_unix(sudo:session): session closed for user root
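The task above snapshots the live ruleset in nftables' JSON form before any of the EDPM files are touched. A minimal standalone sketch of the same step; the play itself only runs `nft -j list ruleset`, and jq here is an extra assumption for inspecting the output, not something the play uses:

    nft -j list ruleset > /tmp/ruleset.json    # same command the module executes
    jq '.nftables | length' /tmp/ruleset.json  # number of objects in the snapshot (assumes jq is installed)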
Nov 25 20:21:34 compute-0 ceph-mon[75144]: pgmap v498: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:34 compute-0 sudo[214436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxkyzxiwmcqztcmpivefzeodqcahajed ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764102093.8817356-1257-129760813191190/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 20:21:34 compute-0 sudo[214436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:34 compute-0 python3[214438]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 20:21:34 compute-0 sudo[214436]: pam_unix(sudo:session): session closed for user root
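The edpm_nftables_from_files module is pointed at the directory populated by the preceding copy tasks. A quick hypothetical check (not part of the play) of what it should find there, based on the files written above:

    ls -1 /var/lib/edpm-config/firewall/
    # expected from the earlier tasks:
    #   edpm-nftables-base.yaml
    #   edpm-nftables-user-rules.yaml
    #   libvirt.yaml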
Nov 25 20:21:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v499: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:35 compute-0 sudo[214588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkinjusgswdpefuplggpomzumwdlgdzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102094.8042576-1265-202897685017797/AnsiballZ_stat.py'
Nov 25 20:21:35 compute-0 sudo[214588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:35 compute-0 python3.9[214590]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:35 compute-0 sudo[214588]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:35 compute-0 sudo[214666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njvigvypxbrijumuxsbhxqyliajxlrur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102094.8042576-1265-202897685017797/AnsiballZ_file.py'
Nov 25 20:21:35 compute-0 sudo[214666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:35 compute-0 python3.9[214668]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:35 compute-0 sudo[214666]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:36 compute-0 ceph-mon[75144]: pgmap v499: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:36 compute-0 sudo[214818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvikojvfjkckucyclwwnqxaxvvamdvim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102096.2011921-1277-134398558912599/AnsiballZ_stat.py'
Nov 25 20:21:36 compute-0 sudo[214818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:36 compute-0 python3.9[214820]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:36 compute-0 sudo[214818]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v500: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:37 compute-0 sudo[214896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgolzkryjxlhlrigbvnztqucgwpcejyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102096.2011921-1277-134398558912599/AnsiballZ_file.py'
Nov 25 20:21:37 compute-0 sudo[214896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:37 compute-0 python3.9[214898]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:37 compute-0 sudo[214896]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:38 compute-0 sudo[215048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aanulhnjhmrxyhayxhwdjqqwicnlnymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102097.691897-1289-7382974840477/AnsiballZ_stat.py'
Nov 25 20:21:38 compute-0 sudo[215048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:38 compute-0 ceph-mon[75144]: pgmap v500: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:38 compute-0 python3.9[215050]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:38 compute-0 sudo[215048]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:38 compute-0 sudo[215126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foyfhmkpkfnbphwweihxsjzetadqafuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102097.691897-1289-7382974840477/AnsiballZ_file.py'
Nov 25 20:21:38 compute-0 sudo[215126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:38 compute-0 python3.9[215128]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:38 compute-0 sudo[215126]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v501: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:39 compute-0 sudo[215278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dytumewojccxfdiaztxzavmhjlubtghd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102099.123369-1301-51418071159173/AnsiballZ_stat.py'
Nov 25 20:21:39 compute-0 sudo[215278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:39 compute-0 python3.9[215280]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:39 compute-0 sudo[215278]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:40 compute-0 sudo[215356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfdxuqwyooxnokcjibpjmzkedlqgnrgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102099.123369-1301-51418071159173/AnsiballZ_file.py'
Nov 25 20:21:40 compute-0 sudo[215356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:40 compute-0 ceph-mon[75144]: pgmap v501: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:40 compute-0 python3.9[215358]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:40 compute-0 sudo[215356]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:41 compute-0 sudo[215508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmqpeqgavstsdomadldfigavsbkvldsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102100.5603976-1313-60679304557886/AnsiballZ_stat.py'
Nov 25 20:21:41 compute-0 sudo[215508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v502: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:41 compute-0 python3.9[215510]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:41 compute-0 sudo[215508]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:41 compute-0 sudo[215633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmnysumwugunfvcicyoondjdgelgedgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102100.5603976-1313-60679304557886/AnsiballZ_copy.py'
Nov 25 20:21:41 compute-0 sudo[215633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:41 compute-0 python3.9[215635]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764102100.5603976-1313-60679304557886/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:41 compute-0 sudo[215633]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:42 compute-0 ceph-mon[75144]: pgmap v502: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:42 compute-0 sudo[215785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bowflpvaxugdifooadxuzywxjdofqhma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102102.1902907-1328-67728846437416/AnsiballZ_file.py'
Nov 25 20:21:42 compute-0 sudo[215785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:42 compute-0 python3.9[215787]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:42 compute-0 sudo[215785]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v503: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:43 compute-0 sudo[215937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sskmvbmklvvfkjzmtrloxntispanhqnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102103.0977445-1336-119171106887953/AnsiballZ_command.py'
Nov 25 20:21:43 compute-0 sudo[215937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:43 compute-0 python3.9[215939]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:21:43 compute-0 sudo[215937]: pam_unix(sudo:session): session closed for user root
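This is a dry run: the five EDPM nftables files are concatenated in dependency order (chains first, so the rule and jump files can reference them) and piped to nft with -c, which parses and validates the batch without changing the live ruleset. Reproduced standalone under the same file layout:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft \
      | nft -c -f -    # -c: check only; nothing is applied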
Nov 25 20:21:44 compute-0 ceph-mon[75144]: pgmap v503: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.214705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102104214736, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 2309540, "memory_usage": 2347472, "flush_reason": "Manual Compaction"}
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102104229308, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2243120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8868, "largest_seqno": 10905, "table_properties": {"data_size": 2233987, "index_size": 5755, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17726, "raw_average_key_size": 19, "raw_value_size": 2215716, "raw_average_value_size": 2426, "num_data_blocks": 265, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101872, "oldest_key_time": 1764101872, "file_creation_time": 1764102104, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 14672 microseconds, and 5957 cpu microseconds.
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.229371) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2243120 bytes OK
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.229397) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.231431) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.231453) EVENT_LOG_v1 {"time_micros": 1764102104231446, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.231476) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2301047, prev total WAL file size 2301047, number of live WAL files 2.
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.233018) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2190KB)], [26(4453KB)]
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102104233114, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 6803999, "oldest_snapshot_seqno": -1}
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3112 keys, 5710239 bytes, temperature: kUnknown
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102104277783, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 5710239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5685342, "index_size": 16020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 71619, "raw_average_key_size": 23, "raw_value_size": 5625627, "raw_average_value_size": 1807, "num_data_blocks": 710, "num_entries": 3112, "num_filter_entries": 3112, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764102104, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.278057) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 5710239 bytes
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.279392) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.0 rd, 127.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 4.3 +0.0 blob) out(5.4 +0.0 blob), read-write-amplify(5.6) write-amplify(2.5) OK, records in: 3626, records dropped: 514 output_compression: NoCompression
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.279413) EVENT_LOG_v1 {"time_micros": 1764102104279403, "job": 10, "event": "compaction_finished", "compaction_time_micros": 44759, "compaction_time_cpu_micros": 28064, "output_level": 6, "num_output_files": 1, "total_output_size": 5710239, "num_input_records": 3626, "num_output_records": 3112, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102104279971, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102104280857, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.232762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.280981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.280991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.280995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.280998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:21:44 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:21:44.281002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:21:44 compute-0 sudo[216092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtynxkllxszqbapyszggnzkreyewwimx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102103.9712574-1344-123754463012390/AnsiballZ_blockinfile.py'
Nov 25 20:21:44 compute-0 sudo[216092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:44 compute-0 python3.9[216094]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:44 compute-0 sudo[216092]: pam_unix(sudo:session): session closed for user root
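Given marker=# {mark} ANSIBLE MANAGED BLOCK with marker_begin=BEGIN and marker_end=END, plus the four include lines passed as block, the rendered section of /etc/sysconfig/nftables.conf should read as below. With validate=nft -c -f %s, blockinfile syntax-checks a temporary copy of the edited file before moving it into place, and create=False means the target file must already exist:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK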
Nov 25 20:21:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v504: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:45 compute-0 sudo[216244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkwypilfftlxyakhbqpsumexrbmzrqbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102105.1061852-1353-245349671043944/AnsiballZ_command.py'
Nov 25 20:21:45 compute-0 sudo[216244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:45 compute-0 python3.9[216246]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:21:45 compute-0 sudo[216244]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:46 compute-0 ceph-mon[75144]: pgmap v504: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:46 compute-0 sudo[216397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iotnbsxbbraqnbbnnjrdubicmvabqlkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102105.9261951-1361-110236859913760/AnsiballZ_stat.py'
Nov 25 20:21:46 compute-0 sudo[216397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:46 compute-0 python3.9[216399]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:21:46 compute-0 sudo[216397]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v505: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:47 compute-0 sudo[216551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uatojfsngkoahnhwflfoueavnpxxecag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102106.7847745-1369-21853691222481/AnsiballZ_command.py'
Nov 25 20:21:47 compute-0 sudo[216551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:47 compute-0 python3.9[216553]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:21:47 compute-0 sudo[216551]: pam_unix(sudo:session): session closed for user root
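The live apply mirrors the earlier check in two steps: the chains file was loaded on its own a few tasks back, and here the flush, rule, and update-jump files are streamed through a single nft -f - invocation, which nftables commits as one transaction, so the EDPM chains are emptied and repopulated without an intermediate window of missing rules. As a standalone sketch:

    nft -f /etc/nftables/edpm-chains.nft    # (re)declare the EDPM chains; idempotent
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
      | nft -f -                            # one atomic flush-and-reload batch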
Nov 25 20:21:48 compute-0 sudo[216706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtfoedjrtbrsxnqdvsfsnwbezvzuvcbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102107.6841154-1377-187940062843528/AnsiballZ_file.py'
Nov 25 20:21:48 compute-0 sudo[216706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:48 compute-0 ceph-mon[75144]: pgmap v505: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:48 compute-0 python3.9[216708]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:48 compute-0 sudo[216706]: pam_unix(sudo:session): session closed for user root
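The touch of /etc/nftables/edpm-rules.nft.changed after the ruleset was copied, the stat on it just before the reload, and its removal here form a change-sentinel pattern: the flush-and-reload only runs when the sentinel exists, and the sentinel is cleared once the reload has succeeded. The equivalent shell logic, as a sketch:

    [ -e /etc/nftables/edpm-rules.nft.changed ] || exit 0  # rules unchanged: skip the reload
    # ... apply the flush/rules/update-jumps batch shown above ...
    rm -f /etc/nftables/edpm-rules.nft.changed             # re-arm for the next change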
Nov 25 20:21:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:21:48.938 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:21:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:21:48.940 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:21:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:21:48.940 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:21:48 compute-0 sudo[216858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfhzreabjiuibbnoqlrrmywrwyfabvnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102108.5704613-1385-172961032626053/AnsiballZ_stat.py'
Nov 25 20:21:48 compute-0 sudo[216858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v506: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:49 compute-0 python3.9[216860]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:49 compute-0 sudo[216858]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:49 compute-0 sudo[216981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjbjwotaevbzvahjduugyjrtuxjugoke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102108.5704613-1385-172961032626053/AnsiballZ_copy.py'
Nov 25 20:21:49 compute-0 sudo[216981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:49 compute-0 python3.9[216983]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102108.5704613-1385-172961032626053/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:49 compute-0 sudo[216981]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:50 compute-0 ceph-mon[75144]: pgmap v506: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:50 compute-0 sshd-session[217008]: Connection closed by 103.236.94.4 port 60736
Nov 25 20:21:50 compute-0 sudo[217134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwwyfwviullxctxpgttlwzhujjlnpwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102110.1859894-1400-126722738731529/AnsiballZ_stat.py'
Nov 25 20:21:50 compute-0 sudo[217134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:50 compute-0 python3.9[217136]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:50 compute-0 sudo[217134]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v507: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:51 compute-0 sudo[217257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwckzwsihkbaniprhdnhcnanmwrmcehf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102110.1859894-1400-126722738731529/AnsiballZ_copy.py'
Nov 25 20:21:51 compute-0 sudo[217257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:51 compute-0 python3.9[217259]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102110.1859894-1400-126722738731529/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:51 compute-0 sudo[217257]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:51 compute-0 podman[217359]: 2025-11-25 20:21:51.98542861 +0000 UTC m=+0.064363842 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:21:52 compute-0 sudo[217426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiodktntysskqrcbnftntvyudbukvtho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102111.7061703-1415-17753297736195/AnsiballZ_stat.py'
Nov 25 20:21:52 compute-0 sudo[217426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:52 compute-0 ceph-mon[75144]: pgmap v507: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:52 compute-0 python3.9[217428]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:21:52 compute-0 sudo[217426]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:52 compute-0 sudo[217549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhovkxyxwpzyowiuxcqczodtxdlhpbmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102111.7061703-1415-17753297736195/AnsiballZ_copy.py'
Nov 25 20:21:52 compute-0 sudo[217549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:52 compute-0 python3.9[217551]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102111.7061703-1415-17753297736195/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:21:53 compute-0 sudo[217549]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v508: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:53 compute-0 sudo[217701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgkeduzyewlsclnrezlblsfustnpssas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102113.2629137-1430-76647972320930/AnsiballZ_systemd.py'
Nov 25 20:21:53 compute-0 sudo[217701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:53 compute-0 python3.9[217703]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
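With daemon_reload=True, enabled=True and state=restarted, the systemd module performs the equivalent of the commands below, which matches the Reloading and "Reached target" lines that follow:

    systemctl daemon-reload                  # pick up the unit files just installed
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target    # journal then reports: Reached target edpm_libvirt.target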
Nov 25 20:21:53 compute-0 systemd[1]: Reloading.
Nov 25 20:21:54 compute-0 systemd-rc-local-generator[217726]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:21:54 compute-0 systemd-sysv-generator[217733]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:21:54 compute-0 ceph-mon[75144]: pgmap v508: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:54 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 20:21:54 compute-0 sudo[217701]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:21:55 compute-0 sudo[217893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtmybbsoflhwlhbyygjiytxgtkabkean ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102114.6831586-1438-154500507851536/AnsiballZ_systemd.py'
Nov 25 20:21:55 compute-0 sudo[217893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:21:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v509: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:55 compute-0 python3.9[217895]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 20:21:55 compute-0 systemd[1]: Reloading.
Nov 25 20:21:55 compute-0 systemd-sysv-generator[217924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:21:55 compute-0 systemd-rc-local-generator[217921]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:21:55 compute-0 systemd[1]: Reloading.
Nov 25 20:21:55 compute-0 systemd-sysv-generator[217967]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:21:55 compute-0 systemd-rc-local-generator[217963]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:21:56 compute-0 sudo[217893]: pam_unix(sudo:session): session closed for user root
Nov 25 20:21:56 compute-0 ceph-mon[75144]: pgmap v509: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:56 compute-0 sshd-session[158172]: Connection closed by 192.168.122.30 port 42606
Nov 25 20:21:56 compute-0 sshd-session[158169]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:21:56 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 25 20:21:56 compute-0 systemd[1]: session-49.scope: Consumed 4min 8.988s CPU time.
Nov 25 20:21:56 compute-0 systemd-logind[789]: Session 49 logged out. Waiting for processes to exit.
Nov 25 20:21:56 compute-0 systemd-logind[789]: Removed session 49.
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:21:56
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'volumes']
Nov 25 20:21:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:21:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v510: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:58 compute-0 ceph-mon[75144]: pgmap v510: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:21:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v511: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:00 compute-0 ceph-mon[75144]: pgmap v511: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v512: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:01 compute-0 sshd-session[217995]: Accepted publickey for zuul from 192.168.122.30 port 55912 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:22:01 compute-0 systemd-logind[789]: New session 50 of user zuul.
Nov 25 20:22:01 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 25 20:22:01 compute-0 sshd-session[217995]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:22:02 compute-0 anacron[51839]: Job `cron.daily' started
Nov 25 20:22:02 compute-0 anacron[51839]: Job `cron.daily' terminated
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:22:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
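[Note] The pg_autoscaler figures above are consistent with pg_target = usage_ratio * bias * (mon_target_pg_per_osd * num_osds), using the default mon_target_pg_per_osd=100 and the 3 OSDs this host deploys later in the log, then rounding to a power of two with a per-pool floor. A sketch reproducing the '.mgr' line under those assumptions (the quantize helper is a simplification of the real module):

    usage_ratio = 1.4371499967441557e-05   # '.mgr' figure from the log
    bias = 1.0
    mon_target_pg_per_osd = 100            # assumed default
    num_osds = 3

    raw_target = usage_ratio * bias * mon_target_pg_per_osd * num_osds
    print(raw_target)                      # 0.004311449990232467, as logged

    def quantize(raw, pool_min=1):
        # Simplified: round up to a power of two, never below the pool floor.
        n = max(pool_min, 1)
        while n < raw:
            n *= 2
        return n

    print(quantize(raw_target))            # 1  (current 1)
    print(quantize(0.0, pool_min=32))      # 32 (the empty data pools)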
Nov 25 20:22:02 compute-0 ceph-mon[75144]: pgmap v512: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:02 compute-0 podman[218124]: 2025-11-25 20:22:02.863125057 +0000 UTC m=+0.137887383 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
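[Note] The health_status entry above is podman's periodic healthcheck for ovn_controller: it runs the configured test (/openstack/healthcheck, mounted into the container) and records healthy with a failing streak of 0. The same check can be re-run by hand (assumes podman and this host's ovn_controller container):

    import subprocess

    r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")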
Nov 25 20:22:03 compute-0 python3.9[218161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:22:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v513: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:04 compute-0 ceph-mon[75144]: pgmap v513: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:04 compute-0 python3.9[218328]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:22:04 compute-0 network[218345]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:22:04 compute-0 network[218346]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:22:04 compute-0 network[218347]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:22:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v514: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:05 compute-0 ceph-mon[75144]: pgmap v514: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v515: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:08 compute-0 ceph-mon[75144]: pgmap v515: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v516: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:10 compute-0 ceph-mon[75144]: pgmap v516: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v517: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:11 compute-0 sudo[218617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yquitvsqejtlbjjfpfblipproydeeuzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102131.006501-47-81723937431537/AnsiballZ_setup.py'
Nov 25 20:22:11 compute-0 sudo[218617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:11 compute-0 python3.9[218619]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:22:12 compute-0 sudo[218617]: pam_unix(sudo:session): session closed for user root
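[Note] The sudo COMMAND lines follow Ansible's become handshake: a shell echoes a random BECOME-SUCCESS marker and then executes the staged AnsiballZ module, so the controller can discard any sudo/profile noise printed before the marker. A minimal sketch of that split (sudo dropped so it runs unprivileged; the marker value here is made up):

    import subprocess

    marker = "BECOME-SUCCESS-example"
    out = subprocess.run(
        ["/bin/sh", "-c", f"echo {marker}; echo '{{\"changed\": false}}'"],
        capture_output=True, text=True).stdout
    # Everything before the marker (MOTD, sudo lecture, ...) is thrown away.
    payload = out.split(marker, 1)[1].strip()
    print(payload)                         # {"changed": false}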
Nov 25 20:22:12 compute-0 ceph-mon[75144]: pgmap v517: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:12 compute-0 sudo[218701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imgjpwkhsvroedxwbflwxvnvciwoomma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102131.006501-47-81723937431537/AnsiballZ_dnf.py'
Nov 25 20:22:12 compute-0 sudo[218701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:12 compute-0 python3.9[218703]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
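[Note] The dnf call above simply ensures iscsi-initiator-utils is installed; every other logged parameter is a module default. An equivalent ad-hoc invocation for reference (assumes ansible-core on the controller and privileges to install packages):

    import subprocess

    subprocess.run([
        "ansible", "localhost", "-b",
        "-m", "ansible.builtin.dnf",
        "-a", "name=iscsi-initiator-utils state=present",
    ], check=True)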
Nov 25 20:22:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v518: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:14 compute-0 ceph-mon[75144]: pgmap v518: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v519: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:16 compute-0 ceph-mon[75144]: pgmap v519: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v520: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:18 compute-0 sudo[218701]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:18 compute-0 ceph-mon[75144]: pgmap v520: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:18 compute-0 sudo[218854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxwmavyusxbxfudkrerlktwdjicayrhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102138.4450119-59-47520369310592/AnsiballZ_stat.py'
Nov 25 20:22:18 compute-0 sudo[218854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:19 compute-0 python3.9[218856]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:22:19 compute-0 sudo[218854]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v521: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:19 compute-0 sudo[219006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-selbxrozzpphnbomgdepqykzfqcmgdvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102139.3687522-69-128278403336442/AnsiballZ_command.py'
Nov 25 20:22:19 compute-0 sudo[219006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:20 compute-0 python3.9[219008]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
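[Note] restorecon -nvr is a dry run: -n makes no changes, -v prints each file whose SELinux context would be relabeled, -r recurses over /etc/iscsi and /var/lib/iscsi. Re-running the same check:

    import subprocess

    r = subprocess.run(
        ["/usr/sbin/restorecon", "-nvr", "/etc/iscsi", "/var/lib/iscsi"],
        capture_output=True, text=True)
    print(r.stdout or "contexts already correct")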
Nov 25 20:22:20 compute-0 sudo[219006]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:20 compute-0 ceph-mon[75144]: pgmap v521: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:20 compute-0 sudo[219159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjromkatfpkdrnidtulfymeluffaihpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102140.3592937-79-1712597884859/AnsiballZ_stat.py'
Nov 25 20:22:20 compute-0 sudo[219159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:20 compute-0 python3.9[219161]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:22:20 compute-0 sudo[219159]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v522: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:21 compute-0 ceph-mon[75144]: pgmap v522: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:21 compute-0 sudo[219311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnjheeuuoyyyulcgajexjzyolerlkfts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102141.0302851-87-33834875255648/AnsiballZ_command.py'
Nov 25 20:22:21 compute-0 sudo[219311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:21 compute-0 python3.9[219313]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:22:21 compute-0 sudo[219311]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:22 compute-0 podman[219438]: 2025-11-25 20:22:22.223198405 +0000 UTC m=+0.077937315 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 20:22:22 compute-0 sudo[219485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjmovpexdhnmvcbydeywmzuwbkhztjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102141.852011-95-60441654406267/AnsiballZ_stat.py'
Nov 25 20:22:22 compute-0 sudo[219485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:22 compute-0 python3.9[219489]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:22:22 compute-0 sudo[219485]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:23 compute-0 sudo[219610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moyklffphwhfiayjevsjtjsauojxokjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102141.852011-95-60441654406267/AnsiballZ_copy.py'
Nov 25 20:22:23 compute-0 sudo[219610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v523: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:23 compute-0 python3.9[219612]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102141.852011-95-60441654406267/.source.iscsi _original_basename=.f2s87vvr follow=False checksum=58d2fd5f62f06eb3174e27ff761163899e241363 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:23 compute-0 sudo[219610]: pam_unix(sudo:session): session closed for user root
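[Note] The copy task above writes /etc/iscsi/initiatorname.iscsi (mode 0644) with the IQN produced by the iscsi-iname run at 20:22:21; the file is a single InitiatorName= line. A sketch of the result (the IQN below is a placeholder, not the one generated in this log, and the path is redirected to /tmp so the sketch is safe to run):

    import os

    iqn = "iqn.1994-05.com.redhat:0123456789ab"  # placeholder value
    path = "/tmp/initiatorname.iscsi"            # /etc/iscsi/... on a real host

    with open(path, "w") as f:
        f.write(f"InitiatorName={iqn}\n")
    os.chmod(path, 0o644)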
Nov 25 20:22:23 compute-0 sudo[219762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcxedpxjcczlbuxhmbydznlpvphhlctu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102143.4484985-110-265930544508947/AnsiballZ_file.py'
Nov 25 20:22:23 compute-0 sudo[219762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:24 compute-0 python3.9[219764]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:24 compute-0 sudo[219762]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:24 compute-0 ceph-mon[75144]: pgmap v523: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:24 compute-0 sudo[219817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:22:24 compute-0 sudo[219817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:24 compute-0 sudo[219817]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:24 compute-0 sudo[219866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:22:24 compute-0 sudo[219866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:24 compute-0 sudo[219866]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:24 compute-0 sudo[219891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:22:24 compute-0 sudo[219891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:24 compute-0 sudo[219891]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:24 compute-0 sudo[219939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:22:24 compute-0 sudo[219939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:24 compute-0 sudo[220014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-purimiwwgqjotgjkbspptordmlgivsut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102144.355152-118-169082260294133/AnsiballZ_lineinfile.py'
Nov 25 20:22:24 compute-0 sudo[220014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:25 compute-0 python3.9[220016]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:25 compute-0 sudo[220014]: pam_unix(sudo:session): session closed for user root
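[Note] The lineinfile call above enforces node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 in /etc/iscsi/iscsid.conf: replace the first line matching the regexp if present, otherwise insert the line after the commented default named by insertafter. A minimal re-implementation against a scratch copy (the seeded comment is an assumed default, mirroring the insertafter pattern from the log):

    import re

    path = "/tmp/iscsid.conf"
    line = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"
    regexp = re.compile(r"^node\.session\.auth\.chap_algs")
    insert_after = re.compile(r"^#node\.session\.auth\.chap\.algs")

    with open(path, "w") as f:                  # seed an assumed default
        f.write("#node.session.auth.chap.algs = MD5\n")

    lines = open(path).read().splitlines()
    for i, l in enumerate(lines):
        if regexp.match(l):
            lines[i] = line                     # replace the existing setting
            break
    else:
        idx = next((i for i, l in enumerate(lines) if insert_after.match(l)),
                   len(lines) - 1)
        lines.insert(idx + 1, line)             # or insert after the comment

    open(path, "w").write("\n".join(lines) + "\n")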
Nov 25 20:22:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v524: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:25 compute-0 sudo[219939]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:22:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:22:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:22:25 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:22:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:22:25 compute-0 ceph-mon[75144]: pgmap v524: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:22:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:22:25 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:22:25 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 6ca98aaa-22fb-46ad-8a11-4fe3c7119a56 does not exist
Nov 25 20:22:25 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev f9fbccd7-9e4c-401d-aaf1-324c55e8ef82 does not exist
Nov 25 20:22:25 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev ec38aa15-159b-4462-8d51-5fbef2fdc589 does not exist
Nov 25 20:22:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:22:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:22:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:22:25 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:22:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:22:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
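[Note] Before running containerized ceph-volume, the cephadm mgr module collects a minimal ceph.conf plus the client.admin and client.bootstrap-osd keys — that is what the audited mon_commands above are. The same two artifacts can be fetched by hand (assumes admin access to the cluster):

    import subprocess

    conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                          capture_output=True, text=True, check=True).stdout
    key = subprocess.run(["ceph", "auth", "get", "client.admin"],
                         capture_output=True, text=True, check=True).stdout
    print(conf)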
Nov 25 20:22:25 compute-0 sudo[220124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:22:25 compute-0 sudo[220124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:25 compute-0 sudo[220124]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:25 compute-0 sudo[220149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:22:25 compute-0 sudo[220149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:25 compute-0 sudo[220149]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:25 compute-0 sudo[220174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:22:25 compute-0 sudo[220174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:25 compute-0 sudo[220174]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:25 compute-0 sudo[220199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:22:25 compute-0 sudo[220199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
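[Note] The cephadm call above spins up a short-lived ceph container (the infallible_torvalds/funny_black containers that follow) to run ceph-volume: lvm batch with --no-auto takes the three pre-created LVs as explicit data devices, --yes skips the prompt, and --no-systemd is used because cephadm manages the OSD units itself. A hand-run equivalent (same fsid and LVs as the log; requires root and the cephadm binary on this host):

    import subprocess

    fsid = "712dd110-763a-5547-8ef7-acda1414fdce"
    subprocess.run([
        "cephadm", "ceph-volume", "--fsid", fsid, "--",
        "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--yes", "--no-systemd",
    ], check=True)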
Nov 25 20:22:26 compute-0 sudo[220321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbkjlyjkpaeiexhcomhzndddrvnzejbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102145.379296-127-187663643933713/AnsiballZ_systemd_service.py'
Nov 25 20:22:26 compute-0 sudo[220321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:26 compute-0 podman[220338]: 2025-11-25 20:22:26.270658762 +0000 UTC m=+0.118712782 container create 63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:22:26 compute-0 podman[220338]: 2025-11-25 20:22:26.180509997 +0000 UTC m=+0.028564037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:22:26 compute-0 python3.9[220323]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:22:26 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 20:22:26 compute-0 systemd[1]: Started libpod-conmon-63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77.scope.
Nov 25 20:22:26 compute-0 sudo[220321]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:22:26 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:22:26 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:22:26 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:22:26 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:22:26 compute-0 podman[220338]: 2025-11-25 20:22:26.574155599 +0000 UTC m=+0.422209649 container init 63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 20:22:26 compute-0 podman[220338]: 2025-11-25 20:22:26.587441496 +0000 UTC m=+0.435495516 container start 63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:22:26 compute-0 infallible_torvalds[220359]: 167 167
Nov 25 20:22:26 compute-0 systemd[1]: libpod-63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77.scope: Deactivated successfully.
Nov 25 20:22:26 compute-0 conmon[220359]: conmon 63ce1f8e5cfcc5c45bf5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77.scope/container/memory.events
Nov 25 20:22:26 compute-0 podman[220338]: 2025-11-25 20:22:26.637547975 +0000 UTC m=+0.485601995 container attach 63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:22:26 compute-0 podman[220338]: 2025-11-25 20:22:26.63928616 +0000 UTC m=+0.487340190 container died 63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d854e46632eab9fc0062c7a6c1369e6ea9314c15c0ef0fb71a28da9444c37e99-merged.mount: Deactivated successfully.
Nov 25 20:22:26 compute-0 podman[220338]: 2025-11-25 20:22:26.72848708 +0000 UTC m=+0.576541110 container remove 63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 20:22:26 compute-0 systemd[1]: libpod-conmon-63ce1f8e5cfcc5c45bf5f14e6d2287c5b2e154700e8630ee9bf7aa2c455cde77.scope: Deactivated successfully.
Nov 25 20:22:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:22:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:22:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:22:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:22:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:22:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:22:26 compute-0 podman[220481]: 2025-11-25 20:22:26.926737618 +0000 UTC m=+0.036977497 container create 81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:22:26 compute-0 systemd[1]: Started libpod-conmon-81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9.scope.
Nov 25 20:22:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121f7fd6ce3f130897b7b7c3b7c8e83dddc812c22d6145da5dac72dc5f5081a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121f7fd6ce3f130897b7b7c3b7c8e83dddc812c22d6145da5dac72dc5f5081a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121f7fd6ce3f130897b7b7c3b7c8e83dddc812c22d6145da5dac72dc5f5081a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121f7fd6ce3f130897b7b7c3b7c8e83dddc812c22d6145da5dac72dc5f5081a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121f7fd6ce3f130897b7b7c3b7c8e83dddc812c22d6145da5dac72dc5f5081a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:27 compute-0 podman[220481]: 2025-11-25 20:22:26.999787277 +0000 UTC m=+0.110027246 container init 81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:22:27 compute-0 podman[220481]: 2025-11-25 20:22:26.909384395 +0000 UTC m=+0.019624284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:22:27 compute-0 podman[220481]: 2025-11-25 20:22:27.006915012 +0000 UTC m=+0.117154921 container start 81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:22:27 compute-0 podman[220481]: 2025-11-25 20:22:27.01067086 +0000 UTC m=+0.120910739 container attach 81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:22:27 compute-0 sudo[220551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ultfgnqjlnpssbvxalvthkkatqjjejfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102146.664174-135-197154595704507/AnsiballZ_systemd_service.py'
Nov 25 20:22:27 compute-0 sudo[220551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v525: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:27 compute-0 python3.9[220555]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:22:27 compute-0 systemd[1]: Reloading.
Nov 25 20:22:27 compute-0 systemd-rc-local-generator[220578]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:22:27 compute-0 systemd-sysv-generator[220582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:22:27 compute-0 ceph-mon[75144]: pgmap v525: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:27 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 20:22:27 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 20:22:27 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 20:22:27 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 20:22:27 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 25 20:22:27 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 25 20:22:27 compute-0 sudo[220551]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:28 compute-0 funny_black[220522]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:22:28 compute-0 funny_black[220522]: --> relative data size: 1.0
Nov 25 20:22:28 compute-0 funny_black[220522]: --> All data devices are unavailable
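[Note] "All data devices are unavailable" here most likely means ceph-volume rejected all three LVs as batch candidates because they already carry OSDs (the lvm list output further below shows ceph.osd_id=0 on ceph_lv0), making this batch an idempotent no-op rather than a failure. The claim marker is the lv_tags string; parsing it (tag string abridged from the log):

    lv_tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
               "ceph.osd_id=0,"
               "ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,"
               "ceph.type=block")              # abridged from the full string

    tags = dict(kv.split("=", 1) for kv in lv_tags.split(","))
    print("ceph.osd_id" in tags, tags.get("ceph.osd_id"))  # True 0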
Nov 25 20:22:28 compute-0 systemd[1]: libpod-81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9.scope: Deactivated successfully.
Nov 25 20:22:28 compute-0 systemd[1]: libpod-81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9.scope: Consumed 1.137s CPU time.
Nov 25 20:22:28 compute-0 podman[220481]: 2025-11-25 20:22:28.202918551 +0000 UTC m=+1.313158470 container died 81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:22:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4121f7fd6ce3f130897b7b7c3b7c8e83dddc812c22d6145da5dac72dc5f5081a-merged.mount: Deactivated successfully.
Nov 25 20:22:28 compute-0 podman[220481]: 2025-11-25 20:22:28.262758854 +0000 UTC m=+1.372998733 container remove 81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:22:28 compute-0 systemd[1]: libpod-conmon-81c72f4547f4d735ed9ad0a9bbb43eaf3933962016daa27cde90d3ddb7eb0ae9.scope: Deactivated successfully.
Nov 25 20:22:28 compute-0 sudo[220199]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:28 compute-0 sudo[220692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:22:28 compute-0 sudo[220692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:28 compute-0 sudo[220692]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:28 compute-0 sudo[220748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:22:28 compute-0 sudo[220748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:28 compute-0 sudo[220748]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:28 compute-0 sudo[220790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:22:28 compute-0 sudo[220790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:28 compute-0 sudo[220790]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:28 compute-0 sudo[220839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:22:28 compute-0 sudo[220839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
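[Note] The lvm list --format json run dispatched here prints the per-OSD JSON that begins at 20:22:31 below. Reducing that structure to osd_id -> device paths is straightforward (the embedded sample is abridged to the fields shown in this excerpt):

    import json

    sample = """
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "lv_name": "ceph_lv0"
        }
      ]
    }
    """

    for osd_id, lvs in json.loads(sample).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], "on", ",".join(lv["devices"]))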
Nov 25 20:22:28 compute-0 sudo[220889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hazxmkcleshijbifksntulnzrqbzpgie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102148.274671-146-9602427072290/AnsiballZ_service_facts.py'
Nov 25 20:22:28 compute-0 sudo[220889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:28 compute-0 python3.9[220892]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:22:28 compute-0 network[220941]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:22:28 compute-0 network[220947]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:22:28 compute-0 network[220948]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:22:28 compute-0 podman[220954]: 2025-11-25 20:22:28.970669674 +0000 UTC m=+0.065403199 container create e3a65a826402c5eee1248b8d563505e6f500e7fee43245e6120db28f460f9b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:22:29 compute-0 podman[220954]: 2025-11-25 20:22:28.943535795 +0000 UTC m=+0.038269380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:22:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v526: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:29 compute-0 systemd[1]: Started libpod-conmon-e3a65a826402c5eee1248b8d563505e6f500e7fee43245e6120db28f460f9b37.scope.
Nov 25 20:22:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:22:29 compute-0 podman[220954]: 2025-11-25 20:22:29.764683143 +0000 UTC m=+0.859416638 container init e3a65a826402c5eee1248b8d563505e6f500e7fee43245e6120db28f460f9b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 25 20:22:29 compute-0 podman[220954]: 2025-11-25 20:22:29.779005037 +0000 UTC m=+0.873738572 container start e3a65a826402c5eee1248b8d563505e6f500e7fee43245e6120db28f460f9b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:22:29 compute-0 podman[220954]: 2025-11-25 20:22:29.785466966 +0000 UTC m=+0.880200571 container attach e3a65a826402c5eee1248b8d563505e6f500e7fee43245e6120db28f460f9b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:22:29 compute-0 fervent_gates[220971]: 167 167
Nov 25 20:22:29 compute-0 systemd[1]: libpod-e3a65a826402c5eee1248b8d563505e6f500e7fee43245e6120db28f460f9b37.scope: Deactivated successfully.
Nov 25 20:22:29 compute-0 podman[220954]: 2025-11-25 20:22:29.788611098 +0000 UTC m=+0.883344633 container died e3a65a826402c5eee1248b8d563505e6f500e7fee43245e6120db28f460f9b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd49d3accd1350997fd6613369cc66e6ba457c2271b622a0b555a2c396c35d36-merged.mount: Deactivated successfully.
Nov 25 20:22:29 compute-0 podman[220954]: 2025-11-25 20:22:29.867208161 +0000 UTC m=+0.961941666 container remove e3a65a826402c5eee1248b8d563505e6f500e7fee43245e6120db28f460f9b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:22:29 compute-0 systemd[1]: libpod-conmon-e3a65a826402c5eee1248b8d563505e6f500e7fee43245e6120db28f460f9b37.scope: Deactivated successfully.
Nov 25 20:22:30 compute-0 podman[221009]: 2025-11-25 20:22:30.072283347 +0000 UTC m=+0.046725681 container create 9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:22:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:30 compute-0 systemd[1]: Started libpod-conmon-9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a.scope.
Nov 25 20:22:30 compute-0 podman[221009]: 2025-11-25 20:22:30.051550516 +0000 UTC m=+0.025992930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:22:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701f9ae1ff54f779ee5a7d44e8018df4d3bd86486d74ac6b7f0350a01386b48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701f9ae1ff54f779ee5a7d44e8018df4d3bd86486d74ac6b7f0350a01386b48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701f9ae1ff54f779ee5a7d44e8018df4d3bd86486d74ac6b7f0350a01386b48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701f9ae1ff54f779ee5a7d44e8018df4d3bd86486d74ac6b7f0350a01386b48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:30 compute-0 podman[221009]: 2025-11-25 20:22:30.171658423 +0000 UTC m=+0.146100767 container init 9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:22:30 compute-0 podman[221009]: 2025-11-25 20:22:30.180251518 +0000 UTC m=+0.154693902 container start 9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:22:30 compute-0 podman[221009]: 2025-11-25 20:22:30.18572306 +0000 UTC m=+0.160165414 container attach 9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:22:30 compute-0 ceph-mon[75144]: pgmap v526: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v527: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]: {
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:     "0": [
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:         {
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "devices": [
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "/dev/loop3"
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             ],
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_name": "ceph_lv0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_size": "21470642176",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "name": "ceph_lv0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "tags": {
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.cluster_name": "ceph",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.crush_device_class": "",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.encrypted": "0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.osd_id": "0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.type": "block",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.vdo": "0"
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             },
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "type": "block",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "vg_name": "ceph_vg0"
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:         }
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:     ],
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:     "1": [
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:         {
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "devices": [
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "/dev/loop4"
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             ],
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_name": "ceph_lv1",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_size": "21470642176",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "name": "ceph_lv1",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "tags": {
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.cluster_name": "ceph",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.crush_device_class": "",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.encrypted": "0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.osd_id": "1",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.type": "block",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.vdo": "0"
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             },
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "type": "block",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "vg_name": "ceph_vg1"
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:         }
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:     ],
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:     "2": [
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:         {
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "devices": [
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "/dev/loop5"
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             ],
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_name": "ceph_lv2",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_size": "21470642176",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "name": "ceph_lv2",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "tags": {
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.cluster_name": "ceph",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.crush_device_class": "",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.encrypted": "0",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.osd_id": "2",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.type": "block",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:                 "ceph.vdo": "0"
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             },
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "type": "block",
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:             "vg_name": "ceph_vg2"
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:         }
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]:     ]
Nov 25 20:22:31 compute-0 friendly_wilbur[221026]: }
Nov 25 20:22:31 compute-0 systemd[1]: libpod-9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a.scope: Deactivated successfully.
Nov 25 20:22:31 compute-0 podman[221009]: 2025-11-25 20:22:31.493027246 +0000 UTC m=+1.467469590 container died 9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:22:31 compute-0 systemd[1]: libpod-9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a.scope: Consumed 1.258s CPU time.
Nov 25 20:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c701f9ae1ff54f779ee5a7d44e8018df4d3bd86486d74ac6b7f0350a01386b48-merged.mount: Deactivated successfully.
Nov 25 20:22:31 compute-0 podman[221009]: 2025-11-25 20:22:31.5586517 +0000 UTC m=+1.533094054 container remove 9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:22:31 compute-0 systemd[1]: libpod-conmon-9e91943c0189cbc16084144a22f05f588b337088f80ad0554a70bd1b445a6a4a.scope: Deactivated successfully.
Nov 25 20:22:31 compute-0 sudo[220839]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:31 compute-0 sudo[221063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:22:31 compute-0 sudo[221063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:31 compute-0 sudo[221063]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:31 compute-0 sudo[221093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:22:31 compute-0 sudo[221093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:31 compute-0 sudo[221093]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:31 compute-0 sudo[221121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:22:31 compute-0 sudo[221121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:31 compute-0 sudo[221121]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:31 compute-0 sudo[221148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:22:31 compute-0 sudo[221148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:32 compute-0 ceph-mon[75144]: pgmap v527: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:32 compute-0 podman[221226]: 2025-11-25 20:22:32.247739389 +0000 UTC m=+0.036971607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:22:32 compute-0 podman[221226]: 2025-11-25 20:22:32.349014754 +0000 UTC m=+0.138246992 container create c6aaa1a5a29edfe2c1b441d178182e673c939901d1a508869812a59fc809c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:22:32 compute-0 systemd[1]: Started libpod-conmon-c6aaa1a5a29edfe2c1b441d178182e673c939901d1a508869812a59fc809c65e.scope.
Nov 25 20:22:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:22:32 compute-0 podman[221226]: 2025-11-25 20:22:32.558969187 +0000 UTC m=+0.348201425 container init c6aaa1a5a29edfe2c1b441d178182e673c939901d1a508869812a59fc809c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:22:32 compute-0 podman[221226]: 2025-11-25 20:22:32.567322206 +0000 UTC m=+0.356554424 container start c6aaa1a5a29edfe2c1b441d178182e673c939901d1a508869812a59fc809c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:22:32 compute-0 systemd[1]: libpod-c6aaa1a5a29edfe2c1b441d178182e673c939901d1a508869812a59fc809c65e.scope: Deactivated successfully.
Nov 25 20:22:32 compute-0 modest_brown[221242]: 167 167
Nov 25 20:22:32 compute-0 podman[221226]: 2025-11-25 20:22:32.634141011 +0000 UTC m=+0.423373329 container attach c6aaa1a5a29edfe2c1b441d178182e673c939901d1a508869812a59fc809c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:22:32 compute-0 podman[221226]: 2025-11-25 20:22:32.634753187 +0000 UTC m=+0.423985455 container died c6aaa1a5a29edfe2c1b441d178182e673c939901d1a508869812a59fc809c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b146c63b1a907a427a48cb99a2b3367013272cd8d2c16f8d5b738b1e0ad5550-merged.mount: Deactivated successfully.
Nov 25 20:22:32 compute-0 podman[221226]: 2025-11-25 20:22:32.70035178 +0000 UTC m=+0.489583988 container remove c6aaa1a5a29edfe2c1b441d178182e673c939901d1a508869812a59fc809c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:22:32 compute-0 systemd[1]: libpod-conmon-c6aaa1a5a29edfe2c1b441d178182e673c939901d1a508869812a59fc809c65e.scope: Deactivated successfully.
Nov 25 20:22:32 compute-0 podman[221265]: 2025-11-25 20:22:32.933178772 +0000 UTC m=+0.061342243 container create ffd90fbf2ac972467a974490fc631d9c679574f099a5c6e2b58079491e6d40c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:22:32 compute-0 systemd[1]: Started libpod-conmon-ffd90fbf2ac972467a974490fc631d9c679574f099a5c6e2b58079491e6d40c4.scope.
Nov 25 20:22:32 compute-0 podman[221265]: 2025-11-25 20:22:32.903970058 +0000 UTC m=+0.032133629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:22:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a27e1ab31878d11ce6c7703ae88b016af7aa61f3570da1a3e7edb77506f66f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a27e1ab31878d11ce6c7703ae88b016af7aa61f3570da1a3e7edb77506f66f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a27e1ab31878d11ce6c7703ae88b016af7aa61f3570da1a3e7edb77506f66f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a27e1ab31878d11ce6c7703ae88b016af7aa61f3570da1a3e7edb77506f66f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:22:33 compute-0 podman[221265]: 2025-11-25 20:22:33.030899913 +0000 UTC m=+0.159063414 container init ffd90fbf2ac972467a974490fc631d9c679574f099a5c6e2b58079491e6d40c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pare, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:22:33 compute-0 podman[221265]: 2025-11-25 20:22:33.04030595 +0000 UTC m=+0.168469421 container start ffd90fbf2ac972467a974490fc631d9c679574f099a5c6e2b58079491e6d40c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:22:33 compute-0 podman[221265]: 2025-11-25 20:22:33.045744632 +0000 UTC m=+0.173908143 container attach ffd90fbf2ac972467a974490fc631d9c679574f099a5c6e2b58079491e6d40c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pare, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:22:33 compute-0 podman[221279]: 2025-11-25 20:22:33.085850049 +0000 UTC m=+0.106603975 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 20:22:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v528: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:33 compute-0 affectionate_pare[221283]: {
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "osd_id": 2,
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "type": "bluestore"
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:     },
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "osd_id": 1,
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "type": "bluestore"
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:     },
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "osd_id": 0,
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:         "type": "bluestore"
Nov 25 20:22:33 compute-0 affectionate_pare[221283]:     }
Nov 25 20:22:33 compute-0 affectionate_pare[221283]: }
Nov 25 20:22:33 compute-0 systemd[1]: libpod-ffd90fbf2ac972467a974490fc631d9c679574f099a5c6e2b58079491e6d40c4.scope: Deactivated successfully.
Nov 25 20:22:33 compute-0 podman[221265]: 2025-11-25 20:22:33.983731731 +0000 UTC m=+1.111895202 container died ffd90fbf2ac972467a974490fc631d9c679574f099a5c6e2b58079491e6d40c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a27e1ab31878d11ce6c7703ae88b016af7aa61f3570da1a3e7edb77506f66f8-merged.mount: Deactivated successfully.
Nov 25 20:22:34 compute-0 podman[221265]: 2025-11-25 20:22:34.044492458 +0000 UTC m=+1.172655929 container remove ffd90fbf2ac972467a974490fc631d9c679574f099a5c6e2b58079491e6d40c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:22:34 compute-0 systemd[1]: libpod-conmon-ffd90fbf2ac972467a974490fc631d9c679574f099a5c6e2b58079491e6d40c4.scope: Deactivated successfully.
Nov 25 20:22:34 compute-0 sudo[221148]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:22:34 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:22:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:22:34 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:22:34 compute-0 sudo[221396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:22:34 compute-0 sudo[221396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:34 compute-0 sudo[221396]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:34 compute-0 ceph-mon[75144]: pgmap v528: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:22:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:22:34 compute-0 sudo[221425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:22:34 compute-0 sudo[221425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:22:34 compute-0 sudo[221425]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:34 compute-0 sudo[220889]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v529: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:35 compute-0 sudo[221612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfjhulmxiljwkdxdyxqcghtuzunykdkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102154.949824-156-183793389929031/AnsiballZ_file.py'
Nov 25 20:22:35 compute-0 sudo[221612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:35 compute-0 python3.9[221614]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 20:22:35 compute-0 sudo[221612]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:36 compute-0 ceph-mon[75144]: pgmap v529: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:36 compute-0 sudo[221764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohhoiqvkozhuccqaaaabtdbsiltkxqbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102155.7583232-164-166684057916997/AnsiballZ_modprobe.py'
Nov 25 20:22:36 compute-0 sudo[221764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:36 compute-0 python3.9[221766]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 25 20:22:36 compute-0 sudo[221764]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v530: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:37 compute-0 sudo[221920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouwiywynganechevehryaumznyzzvspp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102156.751377-172-188635806296104/AnsiballZ_stat.py'
Nov 25 20:22:37 compute-0 sudo[221920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:37 compute-0 python3.9[221922]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:22:37 compute-0 sudo[221920]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:37 compute-0 sudo[222043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpavhtifagjktojyptbzfgncwlefktey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102156.751377-172-188635806296104/AnsiballZ_copy.py'
Nov 25 20:22:37 compute-0 sudo[222043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:38 compute-0 python3.9[222045]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102156.751377-172-188635806296104/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:38 compute-0 sudo[222043]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:38 compute-0 ceph-mon[75144]: pgmap v530: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:38 compute-0 sudo[222195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfecierbmfunurzqdqrvtwlhwiqpfgie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102158.4657435-188-259322411154249/AnsiballZ_lineinfile.py'
Nov 25 20:22:38 compute-0 sudo[222195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:39 compute-0 python3.9[222197]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:39 compute-0 sudo[222195]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v531: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:39 compute-0 sudo[222347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpmbxrwlupocsfkfeigszewlmrjazmqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102159.2275238-196-39448459871955/AnsiballZ_systemd.py'
Nov 25 20:22:39 compute-0 sudo[222347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:40 compute-0 python3.9[222349]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:22:40 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 20:22:40 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 20:22:40 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 20:22:40 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 20:22:40 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 20:22:40 compute-0 ceph-mon[75144]: pgmap v531: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:40 compute-0 sudo[222347]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:40 compute-0 sudo[222503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqddjitzjfpivyceaifcldtopmliwsfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102160.516195-204-16815678990546/AnsiballZ_file.py'
Nov 25 20:22:40 compute-0 sudo[222503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v532: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:41 compute-0 python3.9[222505]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:22:41 compute-0 sudo[222503]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:41 compute-0 sudo[222655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuspdugsovstrddetpblgeqzpbxrifds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102161.4344327-213-171159659951456/AnsiballZ_stat.py'
Nov 25 20:22:41 compute-0 sudo[222655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:41 compute-0 python3.9[222657]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:22:41 compute-0 sudo[222655]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:42 compute-0 ceph-mon[75144]: pgmap v532: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:42 compute-0 sudo[222807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukrzvseefivtqmgpfbqwbtllkxcwjbbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102162.143096-222-118652371530767/AnsiballZ_stat.py'
Nov 25 20:22:42 compute-0 sudo[222807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:42 compute-0 python3.9[222809]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:22:42 compute-0 sudo[222807]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v533: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:43 compute-0 sudo[222959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btdaatthbplhuvuvlolfebdnmdshshbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102162.9204078-230-51494063143231/AnsiballZ_stat.py'
Nov 25 20:22:43 compute-0 sudo[222959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:43 compute-0 python3.9[222961]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:22:43 compute-0 sudo[222959]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:43 compute-0 sudo[223082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tseoyksqycrvnevlnbtquaxmqcuuqdii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102162.9204078-230-51494063143231/AnsiballZ_copy.py'
Nov 25 20:22:43 compute-0 sudo[223082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:44 compute-0 python3.9[223084]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102162.9204078-230-51494063143231/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:44 compute-0 sudo[223082]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:44 compute-0 ceph-mon[75144]: pgmap v533: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:44 compute-0 sudo[223234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkmsakoawoqbrgyjgyavzpusnudwbrnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102164.3417718-245-137756632664182/AnsiballZ_command.py'
Nov 25 20:22:44 compute-0 sudo[223234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:44 compute-0 python3.9[223236]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:22:44 compute-0 sudo[223234]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v534: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:45 compute-0 sudo[223387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjclosbcqmwnlguodsealtzpwaqchjto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102165.153129-253-250098535766883/AnsiballZ_lineinfile.py'
Nov 25 20:22:45 compute-0 sudo[223387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:45 compute-0 python3.9[223389]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:45 compute-0 sudo[223387]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:46 compute-0 ceph-mon[75144]: pgmap v534: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:46 compute-0 sudo[223539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amfuygtxypnkkntuugkzhbcjuqasuxbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102165.9005992-261-240313180849487/AnsiballZ_replace.py'
Nov 25 20:22:46 compute-0 sudo[223539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:46 compute-0 python3.9[223541]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:46 compute-0 sudo[223539]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v535: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:47 compute-0 sudo[223691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmgprwteyxxvqwlwjhnxmkwdhlsxqpvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102166.8747795-269-167326090736679/AnsiballZ_replace.py'
Nov 25 20:22:47 compute-0 sudo[223691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:47 compute-0 python3.9[223693]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:47 compute-0 sudo[223691]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:48 compute-0 sudo[223843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toyuxeihivspkgwtdbrviwzbqvsufgzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102167.7061532-278-96293728078708/AnsiballZ_lineinfile.py'
Nov 25 20:22:48 compute-0 sudo[223843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:48 compute-0 ceph-mon[75144]: pgmap v535: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:48 compute-0 python3.9[223845]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:48 compute-0 sudo[223843]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:48 compute-0 sudo[223995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvcfyrhirqhfvdglcmonvrvwrxjzoufp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102168.5292742-278-246586636954183/AnsiballZ_lineinfile.py'
Nov 25 20:22:48 compute-0 sudo[223995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:22:48.939 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:22:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:22:48.941 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:22:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:22:48.941 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:22:49 compute-0 python3.9[223997]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:49 compute-0 sudo[223995]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v536: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:49 compute-0 sudo[224147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igizycuekopcolqlljsvawltekqcxtjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102169.2743042-278-63814208174911/AnsiballZ_lineinfile.py'
Nov 25 20:22:49 compute-0 sudo[224147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:49 compute-0 python3.9[224149]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:49 compute-0 sudo[224147]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:50 compute-0 ceph-mon[75144]: pgmap v536: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:50 compute-0 sudo[224299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkmdieuwuxdvfbdpeixrtunkuzrjtakm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102170.1033285-278-239065366310930/AnsiballZ_lineinfile.py'
Nov 25 20:22:50 compute-0 sudo[224299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:50 compute-0 python3.9[224301]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:50 compute-0 sudo[224299]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v537: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:51 compute-0 sudo[224451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuylrzilgtfwrpddtbkpqfmlsizzzptl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102170.9688263-307-254781589800943/AnsiballZ_stat.py'
Nov 25 20:22:51 compute-0 sudo[224451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:51 compute-0 python3.9[224453]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:22:51 compute-0 sudo[224451]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:52 compute-0 sudo[224605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmewgzhvwzpqnckenjuaivijfczoaxzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102171.8574843-315-98072827220133/AnsiballZ_file.py'
Nov 25 20:22:52 compute-0 sudo[224605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:52 compute-0 ceph-mon[75144]: pgmap v537: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:52 compute-0 podman[224607]: 2025-11-25 20:22:52.387549099 +0000 UTC m=+0.087057443 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 20:22:52 compute-0 python3.9[224608]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:52 compute-0 sudo[224605]: pam_unix(sudo:session): session closed for user root
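    # The stat of /etc/multipath.conf followed by touching
    # /etc/multipath/.multipath_restart_required is a restart-flag
    # pattern: a later step can restart multipathd only if the flag
    # file exists. A minimal sketch, assuming the register name; paths
    # and mode come from the log.
    - name: Record the current state of /etc/multipath.conf
      ansible.builtin.stat:
        path: /etc/multipath.conf
        checksum_algorithm: sha1
      register: multipath_conf    # register name is assumed

    - name: Flag that multipathd needs a restart
      ansible.builtin.file:
        path: /etc/multipath/.multipath_restart_required
        state: touch
        mode: '0644'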
Nov 25 20:22:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v538: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:53 compute-0 ceph-mon[75144]: pgmap v538: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:53 compute-0 sudo[224777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoctauoejebfjfyaxayjijhvdxnatccn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102172.9618928-324-129584085015926/AnsiballZ_file.py'
Nov 25 20:22:53 compute-0 sudo[224777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:53 compute-0 python3.9[224779]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:22:53 compute-0 sudo[224777]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:54 compute-0 sudo[224929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgakbdhvgsfugxgqudwwdmbyttdcreaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102173.8652833-332-197269931206248/AnsiballZ_stat.py'
Nov 25 20:22:54 compute-0 sudo[224929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:54 compute-0 python3.9[224931]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:22:54 compute-0 sudo[224929]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:54 compute-0 sudo[225007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzkbuzavcskybphwfhvazrswhdptergv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102173.8652833-332-197269931206248/AnsiballZ_file.py'
Nov 25 20:22:54 compute-0 sudo[225007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:54 compute-0 python3.9[225009]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:22:54 compute-0 sudo[225007]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:22:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v539: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:55 compute-0 sudo[225159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afrxtehlkhprspjyrpttndramtxkfaqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102175.126747-332-187573761036050/AnsiballZ_stat.py'
Nov 25 20:22:55 compute-0 sudo[225159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:55 compute-0 python3.9[225161]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:22:55 compute-0 sudo[225159]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:55 compute-0 sudo[225237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veelapzzuuvocubbkephjlrrmvmyslww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102175.126747-332-187573761036050/AnsiballZ_file.py'
Nov 25 20:22:55 compute-0 sudo[225237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:56 compute-0 python3.9[225239]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:22:56 compute-0 ceph-mon[75144]: pgmap v539: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:56 compute-0 sudo[225237]: pam_unix(sudo:session): session closed for user root
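    # The legacy.stat/legacy.file pairs above are what ansible.builtin.copy
    # performs when the destination is already up to date. A hypothetical
    # reconstruction of the step that installs the two EDPM helper scripts,
    # with ownership, mode and SELinux type taken from the logged values.
    - name: Install the EDPM container helper scripts
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: "/var/local/libexec/{{ item }}"
        owner: root
        group: root
        mode: '0700'
        setype: container_file_t
      loop:
        - edpm-container-shutdown
        - edpm-start-podman-container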
Nov 25 20:22:56 compute-0 sudo[225389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohdaljaevibghtetzeuebqwaftexqyjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102176.4320395-355-59827161085098/AnsiballZ_file.py'
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:22:56 compute-0 sudo[225389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:22:56
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['vms', '.mgr', 'images', 'volumes', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 25 20:22:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:22:57 compute-0 python3.9[225391]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:57 compute-0 sudo[225389]: pam_unix(sudo:session): session closed for user root
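    # mode=420 in the file invocation above is octal 0644 rendered in
    # decimal: the play most likely wrote "mode: 0644" unquoted, which
    # YAML parses as an octal integer, so the module logs its decimal
    # value. Written out explicitly, the task is equivalent to:
    - name: Ensure the systemd preset directory exists
      ansible.builtin.file:
        path: /etc/systemd/system-preset
        state: directory
        mode: '0644'    # logged as mode=420, the decimal form of 0644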
Nov 25 20:22:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v540: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:57 compute-0 sudo[225541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arzlnwvufwdxcfpyxsxiliwysydwdpah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102177.2813559-363-112984741816068/AnsiballZ_stat.py'
Nov 25 20:22:57 compute-0 sudo[225541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:57 compute-0 python3.9[225543]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:22:58 compute-0 sudo[225541]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:58 compute-0 ceph-mon[75144]: pgmap v540: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:58 compute-0 sudo[225619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-towocpaouguyktfvbdhjdpnmqpokzoau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102177.2813559-363-112984741816068/AnsiballZ_file.py'
Nov 25 20:22:58 compute-0 sudo[225619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:58 compute-0 python3.9[225621]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:58 compute-0 sudo[225619]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v541: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:22:59 compute-0 sudo[225771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnmsidakniitjwavekjzqlijiuzmsiwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102178.7858663-375-110978509988611/AnsiballZ_stat.py'
Nov 25 20:22:59 compute-0 sudo[225771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:59 compute-0 python3.9[225773]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:22:59 compute-0 sudo[225771]: pam_unix(sudo:session): session closed for user root
Nov 25 20:22:59 compute-0 sudo[225849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmckngtlapezbetyipxjuezralawbpyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102178.7858663-375-110978509988611/AnsiballZ_file.py'
Nov 25 20:22:59 compute-0 sudo[225849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:22:59 compute-0 python3.9[225851]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:22:59 compute-0 sudo[225849]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:00 compute-0 ceph-mon[75144]: pgmap v541: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:00 compute-0 sudo[226001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brfxxohrwminvtsoqodszizzcdwnczip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102180.1802306-387-40309139393075/AnsiballZ_systemd.py'
Nov 25 20:23:00 compute-0 sudo[226001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:00 compute-0 python3.9[226003]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:00 compute-0 systemd[1]: Reloading.
Nov 25 20:23:01 compute-0 systemd-rc-local-generator[226031]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:23:01 compute-0 systemd-sysv-generator[226034]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:23:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v542: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:01 compute-0 sudo[226001]: pam_unix(sudo:session): session closed for user root
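    # After the unit file and the 91-edpm-container-shutdown.preset are in
    # place, the systemd invocation above reloads the daemon, enables the
    # service and starts it in one step. Parameters are taken directly
    # from the logged invocation; only the task name is illustrative.
    - name: Enable and start edpm-container-shutdown
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        enabled: true
        state: started
        daemon_reload: true
        scope: system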
Nov 25 20:23:01 compute-0 sudo[226190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdsjaubhvmbtzlviulpujaomgjffyyfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102181.5539157-395-233647162591302/AnsiballZ_stat.py'
Nov 25 20:23:01 compute-0 sudo[226190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:02 compute-0 python3.9[226192]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:23:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:23:02 compute-0 sudo[226190]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:02 compute-0 ceph-mon[75144]: pgmap v542: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:02 compute-0 sudo[226268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugabfdiqsstipjzncrwvgmkygzdobrjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102181.5539157-395-233647162591302/AnsiballZ_file.py'
Nov 25 20:23:02 compute-0 sudo[226268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:02 compute-0 python3.9[226270]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:02 compute-0 sudo[226268]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v543: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:03 compute-0 sudo[226437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twbrplbhgnmmphtneluhrntieidflowo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102182.977937-407-53481314792279/AnsiballZ_stat.py'
Nov 25 20:23:03 compute-0 sudo[226437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:03 compute-0 podman[226394]: 2025-11-25 20:23:03.466766417 +0000 UTC m=+0.129294233 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Nov 25 20:23:03 compute-0 python3.9[226443]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:23:03 compute-0 sudo[226437]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:03 compute-0 sudo[226524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvlsivqyfcngathnvklbgdyaipdnclkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102182.977937-407-53481314792279/AnsiballZ_file.py'
Nov 25 20:23:03 compute-0 sudo[226524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:04 compute-0 python3.9[226526]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:04 compute-0 sudo[226524]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:04 compute-0 ceph-mon[75144]: pgmap v543: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:04 compute-0 sudo[226676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfokhdohujttfjsjazvruvnmtskdrblp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102184.4022331-419-144775159398827/AnsiballZ_systemd.py'
Nov 25 20:23:04 compute-0 sudo[226676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:05 compute-0 python3.9[226678]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v544: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:05 compute-0 systemd[1]: Reloading.
Nov 25 20:23:05 compute-0 systemd-rc-local-generator[226706]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:23:05 compute-0 systemd-sysv-generator[226709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:23:05 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 20:23:05 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 20:23:05 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 20:23:05 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 20:23:05 compute-0 sudo[226676]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:06 compute-0 ceph-mon[75144]: pgmap v544: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:06 compute-0 sudo[226869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isvfhktuogaslymnotbrvveaxhflosgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102185.9234035-429-140326359555031/AnsiballZ_file.py'
Nov 25 20:23:06 compute-0 sudo[226869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:06 compute-0 python3.9[226871]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:23:06 compute-0 sudo[226869]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:07 compute-0 sudo[227021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djslbmubgfiqwhucdkzrnpomqyicmmur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102186.6922016-437-47669848072910/AnsiballZ_stat.py'
Nov 25 20:23:07 compute-0 sudo[227021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v545: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:07 compute-0 python3.9[227023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:23:07 compute-0 sudo[227021]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:07 compute-0 sudo[227144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbyzdijpumlyuqchrzvxbejfdgggxyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102186.6922016-437-47669848072910/AnsiballZ_copy.py'
Nov 25 20:23:07 compute-0 sudo[227144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:07 compute-0 python3.9[227146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764102186.6922016-437-47669848072910/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:23:07 compute-0 sudo[227144]: pam_unix(sudo:session): session closed for user root
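    # The copy above deploys the container healthcheck script that podman
    # later mounts at /openstack and runs as the health test. A sketch of
    # the task; dest, ownership, mode and SELinux type come from the log,
    # while the control-node src is assumed from _original_basename.
    - name: Deploy the multipathd container healthcheck script
      ansible.builtin.copy:
        src: healthcheck    # control-node path assumed
        dest: /var/lib/openstack/healthchecks/multipathd/
        owner: zuul
        group: zuul
        mode: '0700'
        setype: container_file_t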
Nov 25 20:23:08 compute-0 ceph-mon[75144]: pgmap v545: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:08 compute-0 sudo[227296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nthnipsmrvucepxvceyyflmktzhcvxrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102188.445441-454-53107090769942/AnsiballZ_file.py'
Nov 25 20:23:08 compute-0 sudo[227296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:09 compute-0 python3.9[227298]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:23:09 compute-0 sudo[227296]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v546: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:09 compute-0 sudo[227448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtvadlmwugktflrluppsmewqjfjxdmyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102189.3141766-462-274995790545454/AnsiballZ_stat.py'
Nov 25 20:23:09 compute-0 sudo[227448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:09 compute-0 python3.9[227450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:23:09 compute-0 sudo[227448]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.094788) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102190094861, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 881, "num_deletes": 250, "total_data_size": 828330, "memory_usage": 845128, "flush_reason": "Manual Compaction"}
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102190111699, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 511192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10906, "largest_seqno": 11786, "table_properties": {"data_size": 507631, "index_size": 1341, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8999, "raw_average_key_size": 19, "raw_value_size": 500086, "raw_average_value_size": 1106, "num_data_blocks": 61, "num_entries": 452, "num_filter_entries": 452, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764102105, "oldest_key_time": 1764102105, "file_creation_time": 1764102190, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 16941 microseconds, and 3332 cpu microseconds.
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.111755) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 511192 bytes OK
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.111779) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.113955) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.113973) EVENT_LOG_v1 {"time_micros": 1764102190113966, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.113993) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 824049, prev total WAL file size 824049, number of live WAL files 2.
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.114671) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(499KB)], [29(5576KB)]
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102190114732, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 6221431, "oldest_snapshot_seqno": -1}
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3090 keys, 4448106 bytes, temperature: kUnknown
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102190155227, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 4448106, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4426841, "index_size": 12452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 71452, "raw_average_key_size": 23, "raw_value_size": 4370926, "raw_average_value_size": 1414, "num_data_blocks": 556, "num_entries": 3090, "num_filter_entries": 3090, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764102190, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.155743) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 4448106 bytes
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.158841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.9 rd, 109.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 5.4 +0.0 blob) out(4.2 +0.0 blob), read-write-amplify(20.9) write-amplify(8.7) OK, records in: 3564, records dropped: 474 output_compression: NoCompression
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.158877) EVENT_LOG_v1 {"time_micros": 1764102190158859, "job": 12, "event": "compaction_finished", "compaction_time_micros": 40691, "compaction_time_cpu_micros": 23181, "output_level": 6, "num_output_files": 1, "total_output_size": 4448106, "num_input_records": 3564, "num_output_records": 3090, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102190159232, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102190161707, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.114543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.161762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.161769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.161783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.161787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:23:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:23:10.161790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:23:10 compute-0 ceph-mon[75144]: pgmap v546: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:10 compute-0 sudo[227571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzbuslabtmfwwiyclygecbqdfnximiog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102189.3141766-462-274995790545454/AnsiballZ_copy.py'
Nov 25 20:23:10 compute-0 sudo[227571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:10 compute-0 python3.9[227573]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102189.3141766-462-274995790545454/.source.json _original_basename=.d08nttgn follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:10 compute-0 sudo[227571]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v547: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:11 compute-0 sudo[227723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfltotbxytybvlcdqfljrtprrmfqdgxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102190.8566942-477-97195508721748/AnsiballZ_file.py'
Nov 25 20:23:11 compute-0 sudo[227723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:11 compute-0 python3.9[227725]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:11 compute-0 sudo[227723]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:12 compute-0 sudo[227875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxjmnzfdiqqqdkycphgecpfwlcrnqfgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102191.6940167-485-188934193098495/AnsiballZ_stat.py'
Nov 25 20:23:12 compute-0 sudo[227875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:12 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 20:23:12 compute-0 sudo[227875]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:12 compute-0 ceph-mon[75144]: pgmap v547: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:12 compute-0 sudo[227999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwiotjzrffdldlzslgctgdjzgvbapsth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102191.6940167-485-188934193098495/AnsiballZ_copy.py'
Nov 25 20:23:12 compute-0 sudo[227999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:13 compute-0 sudo[227999]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v548: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:13 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 20:23:13 compute-0 sudo[228152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsiwtsnwykmkwqdcvwecdsfpvlxgecnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102193.4555779-502-110336585865241/AnsiballZ_container_config_data.py'
Nov 25 20:23:13 compute-0 sudo[228152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:14 compute-0 python3.9[228154]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 20:23:14 compute-0 sudo[228152]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:14 compute-0 ceph-mon[75144]: pgmap v548: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:15 compute-0 sudo[228304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxbyphtzyukqcgsucqdskodaesgxremt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102194.5421243-511-244511365980797/AnsiballZ_container_config_hash.py'
Nov 25 20:23:15 compute-0 sudo[228304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v549: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:15 compute-0 python3.9[228306]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:23:15 compute-0 sudo[228304]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:16 compute-0 sudo[228456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfvmnmppcgwirtjavgvjwqlvbcdrtnzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102195.5820239-520-188044055441994/AnsiballZ_podman_container_info.py'
Nov 25 20:23:16 compute-0 sudo[228456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:16 compute-0 ceph-mon[75144]: pgmap v549: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:16 compute-0 python3.9[228458]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 20:23:16 compute-0 sudo[228456]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v550: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:18 compute-0 sudo[228634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnwufavusqutpvgdcthigggdshztdasw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764102197.543628-533-242922786663599/AnsiballZ_edpm_container_manage.py'
Nov 25 20:23:18 compute-0 sudo[228634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:18 compute-0 ceph-mon[75144]: pgmap v550: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:18 compute-0 python3[228636]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
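    # edpm_container_manage reads the *.json startup config written earlier
    # and drives podman from it. A sketch of the task as it would appear in
    # a play; parameters mirror the invocation above, and the osp.edpm
    # collection prefix is an assumption (the log shows only the short
    # module name).
    - name: Manage the multipathd container from its startup config
      osp.edpm.edpm_container_manage:    # collection prefix assumed
        config_id: multipathd
        config_dir: /var/lib/edpm-config/container-startup-config/multipathd
        config_patterns: '*.json'
        config_overrides: {}
        concurrency: 1
        log_base_path: /var/log/containers/stdouts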
Nov 25 20:23:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v551: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:19 compute-0 podman[228650]: 2025-11-25 20:23:19.730470794 +0000 UTC m=+1.152755022 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 20:23:19 compute-0 podman[228708]: 2025-11-25 20:23:19.940705325 +0000 UTC m=+0.080755755 container create 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:23:19 compute-0 podman[228708]: 2025-11-25 20:23:19.8986518 +0000 UTC m=+0.038702270 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 20:23:19 compute-0 python3[228636]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
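    # The PODMAN-CONTAINER-DEBUG line above is the literal CLI that
    # edpm_container_manage generated. As a rough illustration only (this
    # module is not what the log shows being used), the same create maps
    # onto containers.podman.podman_container approximately as below;
    # the volume list is abridged to a representative subset of the
    # full list in the command above.
    - name: Create the multipathd container (abridged volume list)
      containers.podman.podman_container:
        name: multipathd
        image: quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
        network: host
        privileged: true
        env:
          KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
        healthcheck: /openstack/healthcheck
        log_driver: journald
        volumes:
          - /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro
          - /etc/multipath:/etc/multipath:z
          - /etc/multipath.conf:/etc/multipath.conf:ro
          - /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z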
Nov 25 20:23:20 compute-0 sudo[228634]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:20 compute-0 ceph-mon[75144]: pgmap v551: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:20 compute-0 sudo[228896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkdiynctfmoivwmymdvksdkrdkdrtbxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102200.2713876-541-251132738199574/AnsiballZ_stat.py'
Nov 25 20:23:20 compute-0 sudo[228896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:20 compute-0 python3.9[228898]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:23:20 compute-0 sudo[228896]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v552: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:21 compute-0 ceph-mon[75144]: pgmap v552: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:21 compute-0 sudo[229050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcaqoxxrcvchzkuauoltdowzxyjwqoqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102201.1502357-550-97705690719770/AnsiballZ_file.py'
Nov 25 20:23:21 compute-0 sudo[229050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:21 compute-0 python3.9[229052]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:21 compute-0 sudo[229050]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:21 compute-0 sudo[229126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecdepnwmmdybjfttmzijjqkjumbkgfza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102201.1502357-550-97705690719770/AnsiballZ_stat.py'
Nov 25 20:23:21 compute-0 sudo[229126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:22 compute-0 python3.9[229128]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:23:22 compute-0 sudo[229126]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:22 compute-0 sudo[229286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqmecrtqhutxjsdyyypyigszebbryfzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102202.1373627-550-225358941435383/AnsiballZ_copy.py'
Nov 25 20:23:22 compute-0 sudo[229286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:22 compute-0 podman[229251]: 2025-11-25 20:23:22.698071088 +0000 UTC m=+0.081441974 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 20:23:22 compute-0 python3.9[229294]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764102202.1373627-550-225358941435383/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:22 compute-0 sudo[229286]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v553: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:23 compute-0 sudo[229370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxdbukuxogofbtvhofzgknmdnmfizngm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102202.1373627-550-225358941435383/AnsiballZ_systemd.py'
Nov 25 20:23:23 compute-0 sudo[229370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:23 compute-0 python3.9[229372]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:23:23 compute-0 systemd[1]: Reloading.
Nov 25 20:23:23 compute-0 systemd-rc-local-generator[229397]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:23:23 compute-0 systemd-sysv-generator[229402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:23:23 compute-0 sudo[229370]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:24 compute-0 ceph-mon[75144]: pgmap v553: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:24 compute-0 sudo[229480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmuvrguewbyfihqhmyjhzpbxdqjcpxga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102202.1373627-550-225358941435383/AnsiballZ_systemd.py'
Nov 25 20:23:24 compute-0 sudo[229480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:24 compute-0 python3.9[229482]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:24 compute-0 systemd[1]: Reloading.
Nov 25 20:23:24 compute-0 systemd-sysv-generator[229510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:23:24 compute-0 systemd-rc-local-generator[229506]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:23:25 compute-0 systemd[1]: Starting multipathd container...
Nov 25 20:23:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v554: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:23:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52e671911a1266391c685c8cbce0b0ea08755e0a339e974766d6b39c45d5b65/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52e671911a1266391c685c8cbce0b0ea08755e0a339e974766d6b39c45d5b65/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:25 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d.
Nov 25 20:23:25 compute-0 podman[229521]: 2025-11-25 20:23:25.236032431 +0000 UTC m=+0.172940053 container init 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:23:25 compute-0 multipathd[229534]: + sudo -E kolla_set_configs
Nov 25 20:23:25 compute-0 podman[229521]: 2025-11-25 20:23:25.275859982 +0000 UTC m=+0.212767544 container start 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:23:25 compute-0 podman[229521]: multipathd
Nov 25 20:23:25 compute-0 systemd[1]: Started multipathd container.
Nov 25 20:23:25 compute-0 sudo[229543]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 20:23:25 compute-0 sudo[229543]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 20:23:25 compute-0 sudo[229543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 20:23:25 compute-0 sudo[229480]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:25 compute-0 multipathd[229534]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:23:25 compute-0 multipathd[229534]: INFO:__main__:Validating config file
Nov 25 20:23:25 compute-0 multipathd[229534]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:23:25 compute-0 multipathd[229534]: INFO:__main__:Writing out command to execute
Nov 25 20:23:25 compute-0 podman[229544]: 2025-11-25 20:23:25.3796891 +0000 UTC m=+0.084484280 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:23:25 compute-0 sudo[229543]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:25 compute-0 multipathd[229534]: ++ cat /run_command
Nov 25 20:23:25 compute-0 systemd[1]: 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d-db5746c2016553e.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:23:25 compute-0 systemd[1]: 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d-db5746c2016553e.service: Failed with result 'exit-code'.
Nov 25 20:23:25 compute-0 multipathd[229534]: + CMD='/usr/sbin/multipathd -d'
Nov 25 20:23:25 compute-0 multipathd[229534]: + ARGS=
Nov 25 20:23:25 compute-0 multipathd[229534]: + sudo kolla_copy_cacerts
Nov 25 20:23:25 compute-0 sudo[229590]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 20:23:25 compute-0 sudo[229590]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 20:23:25 compute-0 sudo[229590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 20:23:25 compute-0 sudo[229590]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:25 compute-0 multipathd[229534]: + [[ ! -n '' ]]
Nov 25 20:23:25 compute-0 multipathd[229534]: + . kolla_extend_start
Nov 25 20:23:25 compute-0 multipathd[229534]: Running command: '/usr/sbin/multipathd -d'
Nov 25 20:23:25 compute-0 multipathd[229534]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 20:23:25 compute-0 multipathd[229534]: + umask 0022
Nov 25 20:23:25 compute-0 multipathd[229534]: + exec /usr/sbin/multipathd -d
Nov 25 20:23:25 compute-0 multipathd[229534]: 3361.135901 | --------start up--------
Nov 25 20:23:25 compute-0 multipathd[229534]: 3361.135918 | read /etc/multipath.conf
Nov 25 20:23:25 compute-0 multipathd[229534]: 3361.144047 | path checkers start up
Nov 25 20:23:25 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 20:23:25 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 20:23:26 compute-0 python3.9[229729]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:23:26 compute-0 ceph-mon[75144]: pgmap v554: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:26 compute-0 sudo[229881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjkakzudrebtnlwymqcmmlatyieejvom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102206.3921974-586-60251228903735/AnsiballZ_command.py'
Nov 25 20:23:26 compute-0 sudo[229881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:23:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:23:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:23:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:23:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:23:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:23:26 compute-0 python3.9[229883]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:23:27 compute-0 sudo[229881]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v555: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:27 compute-0 sudo[230046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itgpbbdpfunsreigitxcowctlepnzypi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102207.3671362-594-202077388297274/AnsiballZ_systemd.py'
Nov 25 20:23:27 compute-0 sudo[230046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:28 compute-0 python3.9[230048]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:23:28 compute-0 ceph-mon[75144]: pgmap v555: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v556: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:29 compute-0 systemd[1]: Stopping multipathd container...
Nov 25 20:23:29 compute-0 multipathd[229534]: 3364.948383 | exit (signal)
Nov 25 20:23:29 compute-0 multipathd[229534]: 3364.948484 | --------shut down-------
Nov 25 20:23:29 compute-0 systemd[1]: libpod-06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d.scope: Deactivated successfully.
Nov 25 20:23:29 compute-0 podman[230052]: 2025-11-25 20:23:29.264925128 +0000 UTC m=+0.070698879 container died 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 20:23:29 compute-0 systemd[1]: 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d-db5746c2016553e.timer: Deactivated successfully.
Nov 25 20:23:29 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d.
Nov 25 20:23:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d-userdata-shm.mount: Deactivated successfully.
Nov 25 20:23:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d52e671911a1266391c685c8cbce0b0ea08755e0a339e974766d6b39c45d5b65-merged.mount: Deactivated successfully.
Nov 25 20:23:29 compute-0 podman[230052]: 2025-11-25 20:23:29.442862022 +0000 UTC m=+0.248635723 container cleanup 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 20:23:29 compute-0 podman[230052]: multipathd
Nov 25 20:23:29 compute-0 podman[230082]: multipathd
Nov 25 20:23:29 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 20:23:29 compute-0 systemd[1]: Stopped multipathd container.
Nov 25 20:23:29 compute-0 systemd[1]: Starting multipathd container...
Nov 25 20:23:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52e671911a1266391c685c8cbce0b0ea08755e0a339e974766d6b39c45d5b65/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52e671911a1266391c685c8cbce0b0ea08755e0a339e974766d6b39c45d5b65/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d.
Nov 25 20:23:29 compute-0 podman[230094]: 2025-11-25 20:23:29.693318295 +0000 UTC m=+0.128547572 container init 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 20:23:29 compute-0 multipathd[230109]: + sudo -E kolla_set_configs
Nov 25 20:23:29 compute-0 podman[230094]: 2025-11-25 20:23:29.725720816 +0000 UTC m=+0.160950013 container start 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 20:23:29 compute-0 podman[230094]: multipathd
Nov 25 20:23:29 compute-0 sudo[230115]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 20:23:29 compute-0 sudo[230115]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 20:23:29 compute-0 sudo[230115]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 20:23:29 compute-0 systemd[1]: Started multipathd container.
Nov 25 20:23:29 compute-0 sudo[230046]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:29 compute-0 multipathd[230109]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:23:29 compute-0 multipathd[230109]: INFO:__main__:Validating config file
Nov 25 20:23:29 compute-0 multipathd[230109]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:23:29 compute-0 multipathd[230109]: INFO:__main__:Writing out command to execute
Nov 25 20:23:29 compute-0 sudo[230115]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:29 compute-0 multipathd[230109]: ++ cat /run_command
Nov 25 20:23:29 compute-0 multipathd[230109]: + CMD='/usr/sbin/multipathd -d'
Nov 25 20:23:29 compute-0 multipathd[230109]: + ARGS=
Nov 25 20:23:29 compute-0 multipathd[230109]: + sudo kolla_copy_cacerts
Nov 25 20:23:29 compute-0 podman[230116]: 2025-11-25 20:23:29.816962737 +0000 UTC m=+0.082600007 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 20:23:29 compute-0 systemd[1]: 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d-391903042f914ac.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:23:29 compute-0 systemd[1]: 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d-391903042f914ac.service: Failed with result 'exit-code'.
Nov 25 20:23:29 compute-0 sudo[230142]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 20:23:29 compute-0 sudo[230142]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 20:23:29 compute-0 sudo[230142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 20:23:29 compute-0 sudo[230142]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:29 compute-0 multipathd[230109]: + [[ ! -n '' ]]
Nov 25 20:23:29 compute-0 multipathd[230109]: + . kolla_extend_start
Nov 25 20:23:29 compute-0 multipathd[230109]: Running command: '/usr/sbin/multipathd -d'
Nov 25 20:23:29 compute-0 multipathd[230109]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 20:23:29 compute-0 multipathd[230109]: + umask 0022
Nov 25 20:23:29 compute-0 multipathd[230109]: + exec /usr/sbin/multipathd -d
Nov 25 20:23:29 compute-0 multipathd[230109]: 3365.591188 | --------start up--------
Nov 25 20:23:29 compute-0 multipathd[230109]: 3365.591214 | read /etc/multipath.conf
Nov 25 20:23:29 compute-0 multipathd[230109]: 3365.599901 | path checkers start up
Nov 25 20:23:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:30 compute-0 ceph-mon[75144]: pgmap v556: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:30 compute-0 sudo[230299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydqjixudwzpmwlucdksvmbekhrgaxkob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102210.0316288-602-256306648315984/AnsiballZ_file.py'
Nov 25 20:23:30 compute-0 sudo[230299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:30 compute-0 python3.9[230301]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:30 compute-0 sudo[230299]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v557: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:31 compute-0 sudo[230451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohmpaagnfkbqmygeblfalfjozvwfqaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102211.1152298-614-260554316754784/AnsiballZ_file.py'
Nov 25 20:23:31 compute-0 sudo[230451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:31 compute-0 python3.9[230453]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 20:23:31 compute-0 sudo[230451]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:32 compute-0 ceph-mon[75144]: pgmap v557: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:32 compute-0 sudo[230603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aipafnmdckhquwklxbuksfbfzelpdbov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102211.966461-622-108119121786273/AnsiballZ_modprobe.py'
Nov 25 20:23:32 compute-0 sudo[230603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:32 compute-0 python3.9[230605]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 20:23:32 compute-0 kernel: Key type psk registered
Nov 25 20:23:32 compute-0 sudo[230603]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v558: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:33 compute-0 sudo[230767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkdccmlywmxbhwcgohsrqqygzfvziujv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102212.8568742-630-58660616198771/AnsiballZ_stat.py'
Nov 25 20:23:33 compute-0 sudo[230767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:33 compute-0 python3.9[230769]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:23:33 compute-0 sudo[230767]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:34 compute-0 sudo[230912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aspkdjjmdbacshxecljznttkjifsxxio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102212.8568742-630-58660616198771/AnsiballZ_copy.py'
Nov 25 20:23:34 compute-0 podman[230846]: 2025-11-25 20:23:34.055607441 +0000 UTC m=+0.143044974 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 20:23:34 compute-0 sudo[230912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:34 compute-0 python3.9[230917]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764102212.8568742-630-58660616198771/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:34 compute-0 sudo[230912]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:34 compute-0 ceph-mon[75144]: pgmap v558: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:34 compute-0 sudo[230918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:23:34 compute-0 sudo[230918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:34 compute-0 sudo[230918]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:34 compute-0 sudo[230955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:23:34 compute-0 sudo[230955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:34 compute-0 sudo[230955]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:34 compute-0 sudo[230992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:23:34 compute-0 sudo[230992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:34 compute-0 sudo[230992]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:34 compute-0 sudo[231017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:23:34 compute-0 sudo[231017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:35 compute-0 sudo[231184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoupigefgmzrrtfszlmodertqanqvpat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102214.6178746-646-275635976828516/AnsiballZ_lineinfile.py'
Nov 25 20:23:35 compute-0 sudo[231184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v559: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:35 compute-0 sudo[231017]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:23:35 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:23:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:23:35 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:23:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:23:35 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:23:35 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev f20f2024-0fa7-4291-89ee-b6c89a7aa2c9 does not exist
Nov 25 20:23:35 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 0fb286d4-cbc2-46a1-b10b-e36c600078a7 does not exist
Nov 25 20:23:35 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 5f804d9b-00eb-4833-b163-d37328caa1e8 does not exist
Nov 25 20:23:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:23:35 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:23:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:23:35 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:23:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:23:35 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:23:35 compute-0 python3.9[231186]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:35 compute-0 sudo[231184]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:23:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:23:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:23:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:23:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:23:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:23:35 compute-0 sudo[231201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:23:35 compute-0 sudo[231201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:35 compute-0 sudo[231201]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:35 compute-0 sudo[231230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:23:35 compute-0 sudo[231230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:35 compute-0 sudo[231230]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:35 compute-0 sudo[231275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:23:35 compute-0 sudo[231275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:35 compute-0 sudo[231275]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:35 compute-0 sudo[231300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:23:35 compute-0 sudo[231300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:35 compute-0 podman[231464]: 2025-11-25 20:23:35.886583784 +0000 UTC m=+0.060600253 container create 9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:23:35 compute-0 systemd[1]: Started libpod-conmon-9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344.scope.
Nov 25 20:23:35 compute-0 sudo[231504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvjqxamlaisnxvznbnpwkacyvjbelsea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102215.5188675-654-272870411643048/AnsiballZ_systemd.py'
Nov 25 20:23:35 compute-0 sudo[231504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:35 compute-0 podman[231464]: 2025-11-25 20:23:35.856999283 +0000 UTC m=+0.031015852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:23:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:23:35 compute-0 podman[231464]: 2025-11-25 20:23:35.996739292 +0000 UTC m=+0.170755841 container init 9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 20:23:36 compute-0 podman[231464]: 2025-11-25 20:23:36.009839305 +0000 UTC m=+0.183855804 container start 9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elgamal, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:23:36 compute-0 podman[231464]: 2025-11-25 20:23:36.017501432 +0000 UTC m=+0.191518021 container attach 9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elgamal, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:23:36 compute-0 nice_elgamal[231508]: 167 167
Nov 25 20:23:36 compute-0 systemd[1]: libpod-9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344.scope: Deactivated successfully.
Nov 25 20:23:36 compute-0 conmon[231508]: conmon 9af2d0dc310b5e259270 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344.scope/container/memory.events
Nov 25 20:23:36 compute-0 podman[231464]: 2025-11-25 20:23:36.021747273 +0000 UTC m=+0.195763792 container died 9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9490d1ced29e88ebd415917843e664d28115cbd1aeb572e588614ebeeddaf1c-merged.mount: Deactivated successfully.
Nov 25 20:23:36 compute-0 podman[231464]: 2025-11-25 20:23:36.07833784 +0000 UTC m=+0.252354319 container remove 9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:23:36 compute-0 systemd[1]: libpod-conmon-9af2d0dc310b5e2592702e4ae3f845f9290ca165a5e116390006279308486344.scope: Deactivated successfully.
Nov 25 20:23:36 compute-0 podman[231533]: 2025-11-25 20:23:36.283003302 +0000 UTC m=+0.063555045 container create aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:23:36 compute-0 python3.9[231510]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:23:36 compute-0 ceph-mon[75144]: pgmap v559: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:36 compute-0 podman[231533]: 2025-11-25 20:23:36.253493495 +0000 UTC m=+0.034045268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:23:36 compute-0 systemd[1]: Started libpod-conmon-aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0.scope.
Nov 25 20:23:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:23:36 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 20:23:36 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 20:23:36 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 20:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2aa04a955122df8460e2951c69ca7b94325470ac9270125c47ed45cd9d4f3ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:36 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 20:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2aa04a955122df8460e2951c69ca7b94325470ac9270125c47ed45cd9d4f3ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2aa04a955122df8460e2951c69ca7b94325470ac9270125c47ed45cd9d4f3ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2aa04a955122df8460e2951c69ca7b94325470ac9270125c47ed45cd9d4f3ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2aa04a955122df8460e2951c69ca7b94325470ac9270125c47ed45cd9d4f3ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:36 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 20:23:36 compute-0 podman[231533]: 2025-11-25 20:23:36.406870131 +0000 UTC m=+0.187421874 container init aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:23:36 compute-0 podman[231533]: 2025-11-25 20:23:36.420521248 +0000 UTC m=+0.201072991 container start aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:23:36 compute-0 podman[231533]: 2025-11-25 20:23:36.426619952 +0000 UTC m=+0.207171685 container attach aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:23:36 compute-0 sudo[231504]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v560: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:37 compute-0 sudo[231714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xujmtlwroyvoqeoggtjhqrpclxlkaqhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102216.7615535-662-80164977553126/AnsiballZ_dnf.py'
Nov 25 20:23:37 compute-0 sudo[231714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:37 compute-0 ceph-mon[75144]: pgmap v560: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:37 compute-0 python3.9[231717]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:23:37 compute-0 nervous_williamson[231550]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:23:37 compute-0 nervous_williamson[231550]: --> relative data size: 1.0
Nov 25 20:23:37 compute-0 nervous_williamson[231550]: --> All data devices are unavailable
Nov 25 20:23:37 compute-0 systemd[1]: libpod-aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0.scope: Deactivated successfully.
Nov 25 20:23:37 compute-0 podman[231533]: 2025-11-25 20:23:37.667632269 +0000 UTC m=+1.448183982 container died aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:23:37 compute-0 systemd[1]: libpod-aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0.scope: Consumed 1.165s CPU time.
Nov 25 20:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2aa04a955122df8460e2951c69ca7b94325470ac9270125c47ed45cd9d4f3ad-merged.mount: Deactivated successfully.
Nov 25 20:23:37 compute-0 podman[231533]: 2025-11-25 20:23:37.738593414 +0000 UTC m=+1.519145137 container remove aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:23:37 compute-0 systemd[1]: libpod-conmon-aed0b39cbcbfbfd95fd84bc7ff6164bcf30701707cc44ecb908996712e2162f0.scope: Deactivated successfully.
Nov 25 20:23:37 compute-0 sudo[231300]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:37 compute-0 sudo[231746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:23:37 compute-0 sudo[231746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:37 compute-0 sudo[231746]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:37 compute-0 sudo[231771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:23:37 compute-0 sudo[231771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:37 compute-0 sudo[231771]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:38 compute-0 sudo[231796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:23:38 compute-0 sudo[231796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:38 compute-0 sudo[231796]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:38 compute-0 sudo[231821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:23:38 compute-0 sudo[231821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:38 compute-0 podman[231888]: 2025-11-25 20:23:38.595750079 +0000 UTC m=+0.068905199 container create caf37cf3e873f0b007d7bc3a518c37488f91b419ba1a3998d52ba78276c1b150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:23:38 compute-0 systemd[1]: Started libpod-conmon-caf37cf3e873f0b007d7bc3a518c37488f91b419ba1a3998d52ba78276c1b150.scope.
Nov 25 20:23:38 compute-0 podman[231888]: 2025-11-25 20:23:38.568949837 +0000 UTC m=+0.042105017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:23:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:23:38 compute-0 podman[231888]: 2025-11-25 20:23:38.703002054 +0000 UTC m=+0.176157184 container init caf37cf3e873f0b007d7bc3a518c37488f91b419ba1a3998d52ba78276c1b150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:23:38 compute-0 podman[231888]: 2025-11-25 20:23:38.718734192 +0000 UTC m=+0.191889312 container start caf37cf3e873f0b007d7bc3a518c37488f91b419ba1a3998d52ba78276c1b150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 20:23:38 compute-0 podman[231888]: 2025-11-25 20:23:38.723716123 +0000 UTC m=+0.196871253 container attach caf37cf3e873f0b007d7bc3a518c37488f91b419ba1a3998d52ba78276c1b150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:23:38 compute-0 nostalgic_goodall[231904]: 167 167
Nov 25 20:23:38 compute-0 systemd[1]: libpod-caf37cf3e873f0b007d7bc3a518c37488f91b419ba1a3998d52ba78276c1b150.scope: Deactivated successfully.
Nov 25 20:23:38 compute-0 podman[231888]: 2025-11-25 20:23:38.728143138 +0000 UTC m=+0.201298268 container died caf37cf3e873f0b007d7bc3a518c37488f91b419ba1a3998d52ba78276c1b150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0d3efc7e139da7e11992ea154729d715fb8d8b5cb24c5f7f158166775d8da4e-merged.mount: Deactivated successfully.
Nov 25 20:23:38 compute-0 podman[231888]: 2025-11-25 20:23:38.772062146 +0000 UTC m=+0.245217226 container remove caf37cf3e873f0b007d7bc3a518c37488f91b419ba1a3998d52ba78276c1b150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:23:38 compute-0 systemd[1]: libpod-conmon-caf37cf3e873f0b007d7bc3a518c37488f91b419ba1a3998d52ba78276c1b150.scope: Deactivated successfully.
Nov 25 20:23:39 compute-0 podman[231929]: 2025-11-25 20:23:39.035176289 +0000 UTC m=+0.070696139 container create 790231c53a708c6c085d99f03f5d1bd139f6040055de3599bf5283fd5d1f5b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:23:39 compute-0 systemd[1]: Started libpod-conmon-790231c53a708c6c085d99f03f5d1bd139f6040055de3599bf5283fd5d1f5b3b.scope.
Nov 25 20:23:39 compute-0 podman[231929]: 2025-11-25 20:23:39.004372774 +0000 UTC m=+0.039892694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:23:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07518782f07eb3389d2e33f2df4b56b6df3233a39a3ed14fbfa3b310ee34e31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07518782f07eb3389d2e33f2df4b56b6df3233a39a3ed14fbfa3b310ee34e31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07518782f07eb3389d2e33f2df4b56b6df3233a39a3ed14fbfa3b310ee34e31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07518782f07eb3389d2e33f2df4b56b6df3233a39a3ed14fbfa3b310ee34e31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:39 compute-0 podman[231929]: 2025-11-25 20:23:39.138178175 +0000 UTC m=+0.173698045 container init 790231c53a708c6c085d99f03f5d1bd139f6040055de3599bf5283fd5d1f5b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:23:39 compute-0 podman[231929]: 2025-11-25 20:23:39.151937925 +0000 UTC m=+0.187457785 container start 790231c53a708c6c085d99f03f5d1bd139f6040055de3599bf5283fd5d1f5b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:23:39 compute-0 podman[231929]: 2025-11-25 20:23:39.157558564 +0000 UTC m=+0.193078424 container attach 790231c53a708c6c085d99f03f5d1bd139f6040055de3599bf5283fd5d1f5b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:23:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v561: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:39 compute-0 systemd[1]: Reloading.
Nov 25 20:23:39 compute-0 happy_diffie[231946]: {
Nov 25 20:23:39 compute-0 happy_diffie[231946]:     "0": [
Nov 25 20:23:39 compute-0 happy_diffie[231946]:         {
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "devices": [
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "/dev/loop3"
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             ],
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_name": "ceph_lv0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_size": "21470642176",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "name": "ceph_lv0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "tags": {
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.cluster_name": "ceph",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.crush_device_class": "",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.encrypted": "0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.osd_id": "0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.type": "block",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.vdo": "0"
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             },
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "type": "block",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "vg_name": "ceph_vg0"
Nov 25 20:23:39 compute-0 happy_diffie[231946]:         }
Nov 25 20:23:39 compute-0 happy_diffie[231946]:     ],
Nov 25 20:23:39 compute-0 happy_diffie[231946]:     "1": [
Nov 25 20:23:39 compute-0 happy_diffie[231946]:         {
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "devices": [
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "/dev/loop4"
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             ],
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_name": "ceph_lv1",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_size": "21470642176",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "name": "ceph_lv1",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "tags": {
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.cluster_name": "ceph",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.crush_device_class": "",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.encrypted": "0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.osd_id": "1",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.type": "block",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.vdo": "0"
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             },
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "type": "block",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "vg_name": "ceph_vg1"
Nov 25 20:23:39 compute-0 happy_diffie[231946]:         }
Nov 25 20:23:39 compute-0 happy_diffie[231946]:     ],
Nov 25 20:23:39 compute-0 happy_diffie[231946]:     "2": [
Nov 25 20:23:39 compute-0 happy_diffie[231946]:         {
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "devices": [
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "/dev/loop5"
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             ],
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_name": "ceph_lv2",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_size": "21470642176",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "name": "ceph_lv2",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "tags": {
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.cluster_name": "ceph",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.crush_device_class": "",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.encrypted": "0",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.osd_id": "2",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.type": "block",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:                 "ceph.vdo": "0"
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             },
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "type": "block",
Nov 25 20:23:39 compute-0 happy_diffie[231946]:             "vg_name": "ceph_vg2"
Nov 25 20:23:39 compute-0 happy_diffie[231946]:         }
Nov 25 20:23:39 compute-0 happy_diffie[231946]:     ]
Nov 25 20:23:39 compute-0 happy_diffie[231946]: }
Nov 25 20:23:39 compute-0 podman[231929]: 2025-11-25 20:23:39.958534123 +0000 UTC m=+0.994053953 container died 790231c53a708c6c085d99f03f5d1bd139f6040055de3599bf5283fd5d1f5b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:23:39 compute-0 systemd-rc-local-generator[231976]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:23:39 compute-0 systemd-sysv-generator[231982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:23:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:40 compute-0 systemd[1]: libpod-790231c53a708c6c085d99f03f5d1bd139f6040055de3599bf5283fd5d1f5b3b.scope: Deactivated successfully.
Nov 25 20:23:40 compute-0 systemd[1]: Reloading.
Nov 25 20:23:40 compute-0 ceph-mon[75144]: pgmap v561: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:40 compute-0 podman[231929]: 2025-11-25 20:23:40.262675492 +0000 UTC m=+1.298195322 container remove 790231c53a708c6c085d99f03f5d1bd139f6040055de3599bf5283fd5d1f5b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:23:40 compute-0 sudo[231821]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:40 compute-0 systemd-rc-local-generator[232038]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:23:40 compute-0 systemd-sysv-generator[232042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e07518782f07eb3389d2e33f2df4b56b6df3233a39a3ed14fbfa3b310ee34e31-merged.mount: Deactivated successfully.
Nov 25 20:23:40 compute-0 systemd[1]: libpod-conmon-790231c53a708c6c085d99f03f5d1bd139f6040055de3599bf5283fd5d1f5b3b.scope: Deactivated successfully.
Nov 25 20:23:40 compute-0 sudo[232007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:23:40 compute-0 sudo[232007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:40 compute-0 sudo[232007]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:40 compute-0 sudo[232067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:23:40 compute-0 sudo[232067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:40 compute-0 sudo[232067]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:40 compute-0 sudo[232093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:23:40 compute-0 sudo[232093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:40 compute-0 sudo[232093]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:40 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 20:23:40 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 20:23:40 compute-0 sudo[232124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:23:40 compute-0 sudo[232124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:40 compute-0 lvm[232180]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 20:23:40 compute-0 lvm[232180]: VG ceph_vg1 finished
Nov 25 20:23:40 compute-0 lvm[232179]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 20:23:40 compute-0 lvm[232179]: VG ceph_vg2 finished
Nov 25 20:23:40 compute-0 lvm[232178]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 20:23:40 compute-0 lvm[232178]: VG ceph_vg0 finished
Nov 25 20:23:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 20:23:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 20:23:40 compute-0 systemd[1]: Reloading.
Nov 25 20:23:41 compute-0 systemd-rc-local-generator[232273]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:23:41 compute-0 systemd-sysv-generator[232276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:23:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v562: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:41 compute-0 podman[232280]: 2025-11-25 20:23:41.174508399 +0000 UTC m=+0.048100497 container create 4472ee03d2e78ac4daeb2c6bda5181768bc429aeb8a244bffecce22ac2e425f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lumiere, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 20:23:41 compute-0 podman[232280]: 2025-11-25 20:23:41.15340874 +0000 UTC m=+0.027000918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:23:41 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 20:23:41 compute-0 systemd[1]: Started libpod-conmon-4472ee03d2e78ac4daeb2c6bda5181768bc429aeb8a244bffecce22ac2e425f7.scope.
Nov 25 20:23:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:23:41 compute-0 podman[232280]: 2025-11-25 20:23:41.383715461 +0000 UTC m=+0.257307579 container init 4472ee03d2e78ac4daeb2c6bda5181768bc429aeb8a244bffecce22ac2e425f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lumiere, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 25 20:23:41 compute-0 podman[232280]: 2025-11-25 20:23:41.389946128 +0000 UTC m=+0.263538226 container start 4472ee03d2e78ac4daeb2c6bda5181768bc429aeb8a244bffecce22ac2e425f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lumiere, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:23:41 compute-0 podman[232280]: 2025-11-25 20:23:41.392749357 +0000 UTC m=+0.266341455 container attach 4472ee03d2e78ac4daeb2c6bda5181768bc429aeb8a244bffecce22ac2e425f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:23:41 compute-0 jolly_lumiere[232459]: 167 167
Nov 25 20:23:41 compute-0 systemd[1]: libpod-4472ee03d2e78ac4daeb2c6bda5181768bc429aeb8a244bffecce22ac2e425f7.scope: Deactivated successfully.
Nov 25 20:23:41 compute-0 podman[232280]: 2025-11-25 20:23:41.396598647 +0000 UTC m=+0.270190745 container died 4472ee03d2e78ac4daeb2c6bda5181768bc429aeb8a244bffecce22ac2e425f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lumiere, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-299b1c0c27f743d8f88a3df29e8ad2f2c4dea9659ecd29352a6c8a9bb317e25a-merged.mount: Deactivated successfully.
Nov 25 20:23:41 compute-0 podman[232280]: 2025-11-25 20:23:41.432832987 +0000 UTC m=+0.306425085 container remove 4472ee03d2e78ac4daeb2c6bda5181768bc429aeb8a244bffecce22ac2e425f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lumiere, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:23:41 compute-0 systemd[1]: libpod-conmon-4472ee03d2e78ac4daeb2c6bda5181768bc429aeb8a244bffecce22ac2e425f7.scope: Deactivated successfully.
Nov 25 20:23:41 compute-0 podman[232707]: 2025-11-25 20:23:41.580445148 +0000 UTC m=+0.037766563 container create e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:23:41 compute-0 systemd[1]: Started libpod-conmon-e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788.scope.
Nov 25 20:23:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:23:41 compute-0 podman[232707]: 2025-11-25 20:23:41.564441044 +0000 UTC m=+0.021762469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e701154c71209a1fa6410d1e236189f44d1151ed3416769e3a3a82f763770a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e701154c71209a1fa6410d1e236189f44d1151ed3416769e3a3a82f763770a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e701154c71209a1fa6410d1e236189f44d1151ed3416769e3a3a82f763770a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e701154c71209a1fa6410d1e236189f44d1151ed3416769e3a3a82f763770a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:23:41 compute-0 podman[232707]: 2025-11-25 20:23:41.69001772 +0000 UTC m=+0.147339145 container init e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 20:23:41 compute-0 podman[232707]: 2025-11-25 20:23:41.698401478 +0000 UTC m=+0.155722923 container start e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:23:41 compute-0 podman[232707]: 2025-11-25 20:23:41.70265953 +0000 UTC m=+0.159980955 container attach e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:23:41 compute-0 sudo[231714]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:42 compute-0 ceph-mon[75144]: pgmap v562: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 20:23:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 20:23:42 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.554s CPU time.
Nov 25 20:23:42 compute-0 systemd[1]: run-r85963da2bbaf4e3999f6a365fa871062.service: Deactivated successfully.
Nov 25 20:23:42 compute-0 sudo[233627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbxzqoibtydomxhcwbjisuyctpnboliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102222.053331-670-240839170903603/AnsiballZ_systemd_service.py'
Nov 25 20:23:42 compute-0 sudo[233627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:42 compute-0 magical_mayer[232835]: {
Nov 25 20:23:42 compute-0 magical_mayer[232835]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "osd_id": 2,
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "type": "bluestore"
Nov 25 20:23:42 compute-0 magical_mayer[232835]:     },
Nov 25 20:23:42 compute-0 magical_mayer[232835]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "osd_id": 1,
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "type": "bluestore"
Nov 25 20:23:42 compute-0 magical_mayer[232835]:     },
Nov 25 20:23:42 compute-0 magical_mayer[232835]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "osd_id": 0,
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:23:42 compute-0 magical_mayer[232835]:         "type": "bluestore"
Nov 25 20:23:42 compute-0 magical_mayer[232835]:     }
Nov 25 20:23:42 compute-0 magical_mayer[232835]: }
Nov 25 20:23:42 compute-0 python3.9[233632]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:23:42 compute-0 systemd[1]: libpod-e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788.scope: Deactivated successfully.
Nov 25 20:23:42 compute-0 systemd[1]: libpod-e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788.scope: Consumed 1.027s CPU time.
Nov 25 20:23:42 compute-0 podman[232707]: 2025-11-25 20:23:42.72303573 +0000 UTC m=+1.180357175 container died e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:23:42 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 25 20:23:42 compute-0 iscsid[220598]: iscsid shutting down.
Nov 25 20:23:42 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 20:23:42 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 25 20:23:42 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 20:23:42 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 20:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e701154c71209a1fa6410d1e236189f44d1151ed3416769e3a3a82f763770a1-merged.mount: Deactivated successfully.
Nov 25 20:23:42 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 20:23:42 compute-0 podman[232707]: 2025-11-25 20:23:42.795514759 +0000 UTC m=+1.252836184 container remove e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 20:23:42 compute-0 systemd[1]: libpod-conmon-e5ede4b13aeb3bc7919d33255ed572bdf287e89fd707da6f98f0803b20623788.scope: Deactivated successfully.
Nov 25 20:23:42 compute-0 sudo[233627]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:42 compute-0 sudo[232124]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:23:42 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:23:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:23:42 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:23:42 compute-0 sudo[233670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:23:42 compute-0 sudo[233670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:42 compute-0 sudo[233670]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:42 compute-0 sudo[233718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:23:42 compute-0 sudo[233718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:23:42 compute-0 sudo[233718]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v563: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:43 compute-0 python3.9[233868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:23:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:23:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:23:43 compute-0 ceph-mon[75144]: pgmap v563: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:44 compute-0 sudo[234022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtbooiqbfrsrhcqcsklawzaskbzxthho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102224.3037968-688-206157011735344/AnsiballZ_file.py'
Nov 25 20:23:44 compute-0 sudo[234022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:44 compute-0 python3.9[234024]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:23:44 compute-0 sudo[234022]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v564: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:45 compute-0 sudo[234174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptedxaqpxpuejzcgkyqpujjhefqmvdmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102225.3305306-699-83806991097524/AnsiballZ_systemd_service.py'
Nov 25 20:23:45 compute-0 sudo[234174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:46 compute-0 python3.9[234176]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:23:46 compute-0 systemd[1]: Reloading.
Nov 25 20:23:46 compute-0 systemd-sysv-generator[234207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:23:46 compute-0 systemd-rc-local-generator[234204]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:23:46 compute-0 ceph-mon[75144]: pgmap v564: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:46 compute-0 sudo[234174]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v565: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:47 compute-0 python3.9[234361]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:23:47 compute-0 network[234378]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:23:47 compute-0 network[234379]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:23:47 compute-0 network[234380]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:23:48 compute-0 ceph-mon[75144]: pgmap v565: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:23:48.940 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:23:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:23:48.941 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:23:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:23:48.942 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:23:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v566: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:50 compute-0 ceph-mon[75144]: pgmap v566: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v567: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:52 compute-0 ceph-mon[75144]: pgmap v567: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:53 compute-0 podman[234536]: 2025-11-25 20:23:53.007457384 +0000 UTC m=+0.104265362 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 20:23:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v568: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:53 compute-0 sudo[234671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heyfiinkdlixlkdcyacijswdmmzwhmnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102232.9114268-718-241643420670143/AnsiballZ_systemd_service.py'
Nov 25 20:23:53 compute-0 sudo[234671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:53 compute-0 python3.9[234673]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:53 compute-0 sudo[234671]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:54 compute-0 sudo[234824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zskiqtifrpsgnryvivajkapyqzequrmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102233.8904033-718-70784700263185/AnsiballZ_systemd_service.py'
Nov 25 20:23:54 compute-0 sudo[234824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:54 compute-0 ceph-mon[75144]: pgmap v568: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:54 compute-0 python3.9[234826]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:54 compute-0 sudo[234824]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:23:55 compute-0 sudo[234977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylkqmzrotxrtayfmjnvbkwmmnlsyckxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102234.796837-718-98393508513306/AnsiballZ_systemd_service.py'
Nov 25 20:23:55 compute-0 sudo[234977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v569: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:55 compute-0 python3.9[234979]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:55 compute-0 sudo[234977]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:55 compute-0 sudo[235130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfcvjwnzhurjrrjhptejawespsavanqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102235.6164722-718-5818023538661/AnsiballZ_systemd_service.py'
Nov 25 20:23:55 compute-0 sudo[235130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:56 compute-0 python3.9[235132]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:56 compute-0 sudo[235130]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:56 compute-0 ceph-mon[75144]: pgmap v569: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:23:56 compute-0 sudo[235283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcwqjcglismbmjgskbajvmcjxcnggniy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102236.5348709-718-83575769316067/AnsiballZ_systemd_service.py'
Nov 25 20:23:56 compute-0 sudo[235283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:23:56
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms']
Nov 25 20:23:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:23:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v570: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:57 compute-0 python3.9[235285]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:57 compute-0 sudo[235283]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:57 compute-0 ceph-mon[75144]: pgmap v570: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:57 compute-0 sudo[235436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpvtfpbtcytjisdgxqmxgouonzcccvpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102237.4142559-718-99255205381969/AnsiballZ_systemd_service.py'
Nov 25 20:23:57 compute-0 sudo[235436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:58 compute-0 python3.9[235438]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:58 compute-0 sudo[235436]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:58 compute-0 sudo[235589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjkqcnzdquqpalnowuovfkposeilbxjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102238.3665216-718-123333504524381/AnsiballZ_systemd_service.py'
Nov 25 20:23:58 compute-0 sudo[235589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:23:59 compute-0 python3.9[235591]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:23:59 compute-0 sudo[235589]: pam_unix(sudo:session): session closed for user root
Nov 25 20:23:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v571: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:23:59 compute-0 sudo[235742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veydzqfjgowqiybmglevkepwlcbwlfah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102239.3543825-718-136641679663848/AnsiballZ_systemd_service.py'
Nov 25 20:23:59 compute-0 sudo[235742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:00 compute-0 podman[235745]: 2025-11-25 20:24:00.000978444 +0000 UTC m=+0.091869250 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:24:00 compute-0 python3.9[235744]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:24:00 compute-0 sudo[235742]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:00 compute-0 ceph-mon[75144]: pgmap v571: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:00 compute-0 sudo[235913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayediyxukbupbmuqhkplppaysgydjbkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102240.491117-777-222618941162952/AnsiballZ_file.py'
Nov 25 20:24:00 compute-0 sudo[235913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:01 compute-0 python3.9[235915]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:01 compute-0 sudo[235913]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v572: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:01 compute-0 sudo[236065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulndcoplgofqjfrgbgnpguoyabvescyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102241.2884042-777-270571168879652/AnsiballZ_file.py'
Nov 25 20:24:01 compute-0 sudo[236065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:01 compute-0 python3.9[236067]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:01 compute-0 sudo[236065]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:24:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:24:02 compute-0 ceph-mon[75144]: pgmap v572: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:02 compute-0 sudo[236217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqwsuuncmxbrtcrlnmzmwjgmmblkrubs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102242.0628808-777-270611721219857/AnsiballZ_file.py'
Nov 25 20:24:02 compute-0 sudo[236217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:02 compute-0 python3.9[236219]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:02 compute-0 sudo[236217]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:03 compute-0 sudo[236369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mojykcltivwksiskymhdntxfewvdpjek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102242.8133311-777-227714893953151/AnsiballZ_file.py'
Nov 25 20:24:03 compute-0 sudo[236369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v573: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:03 compute-0 python3.9[236371]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:03 compute-0 sudo[236369]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:03 compute-0 sudo[236521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpxtfhnryvfclnvxssukmpgttikslfvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102243.6212523-777-148520324245795/AnsiballZ_file.py'
Nov 25 20:24:03 compute-0 sudo[236521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:04 compute-0 python3.9[236523]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:04 compute-0 sudo[236521]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:04 compute-0 ceph-mon[75144]: pgmap v573: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:04 compute-0 sudo[236687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycnenlsdonoaqlaomsusflqmfcxrltvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102244.3967876-777-103782176474952/AnsiballZ_file.py'
Nov 25 20:24:04 compute-0 sudo[236687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:04 compute-0 podman[236647]: 2025-11-25 20:24:04.85407703 +0000 UTC m=+0.143014773 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 20:24:04 compute-0 python3.9[236696]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:05 compute-0 sudo[236687]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v574: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:05 compute-0 ceph-mon[75144]: pgmap v574: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:05 compute-0 sudo[236853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcvzjaqapgxlpdgzboifkalwtkjxjduy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102245.1572936-777-183740231902558/AnsiballZ_file.py'
Nov 25 20:24:05 compute-0 sudo[236853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:05 compute-0 python3.9[236855]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:05 compute-0 sudo[236853]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:06 compute-0 sudo[237005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgnhravolmzyvcjteoyhbkurrmueuyze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102245.8813505-777-129629630283333/AnsiballZ_file.py'
Nov 25 20:24:06 compute-0 sudo[237005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:06 compute-0 python3.9[237007]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:06 compute-0 sudo[237005]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:06 compute-0 sudo[237157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzmwekbvybzpervbjsitxmirvrcyaelr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102246.575904-834-160118910700287/AnsiballZ_file.py'
Nov 25 20:24:06 compute-0 sudo[237157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v575: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:07 compute-0 python3.9[237159]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:07 compute-0 sudo[237157]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:07 compute-0 sudo[237309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfsfgyudqoxdhoczjrvpyutjvbzmwciq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102247.3821957-834-200060617617773/AnsiballZ_file.py'
Nov 25 20:24:07 compute-0 sudo[237309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:07 compute-0 python3.9[237311]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:07 compute-0 sudo[237309]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:08 compute-0 ceph-mon[75144]: pgmap v575: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:08 compute-0 sudo[237461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odbvxbjnyfeiphczncjzkzuadhdruhln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102248.089137-834-225614399365382/AnsiballZ_file.py'
Nov 25 20:24:08 compute-0 sudo[237461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:08 compute-0 python3.9[237463]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:08 compute-0 sudo[237461]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:09 compute-0 sudo[237613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jukzvzwdtgkllkajcaxcjhtytxgcccpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102248.8268828-834-52282141380197/AnsiballZ_file.py'
Nov 25 20:24:09 compute-0 sudo[237613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v576: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:09 compute-0 python3.9[237615]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:09 compute-0 sudo[237613]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:09 compute-0 ceph-mon[75144]: pgmap v576: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:09 compute-0 sudo[237765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpjvjykdbwpdjbiacucbdqnffkmdtrkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102249.4937272-834-37542091310993/AnsiballZ_file.py'
Nov 25 20:24:09 compute-0 sudo[237765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:10 compute-0 python3.9[237767]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:10 compute-0 sudo[237765]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:10 compute-0 sudo[237917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdgktavtbkficzihehexblakfquerzfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102250.2430382-834-154682024633339/AnsiballZ_file.py'
Nov 25 20:24:10 compute-0 sudo[237917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:10 compute-0 python3.9[237919]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:10 compute-0 sudo[237917]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v577: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:11 compute-0 sudo[238069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxpxzxshbotwyotztpyzckexsxhtlvyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102250.987581-834-103340538874667/AnsiballZ_file.py'
Nov 25 20:24:11 compute-0 sudo[238069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:11 compute-0 python3.9[238071]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:11 compute-0 sudo[238069]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:12 compute-0 sudo[238221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yctdkpmynlzcincrfhkwrjcwlktvkgkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102251.7272868-834-260498140475999/AnsiballZ_file.py'
Nov 25 20:24:12 compute-0 sudo[238221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:12 compute-0 ceph-mon[75144]: pgmap v577: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:12 compute-0 python3.9[238223]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:12 compute-0 sudo[238221]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:13 compute-0 sudo[238373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wejlfksfcuibrbtgzusmexurpqigpiiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102252.641169-892-83192739196236/AnsiballZ_command.py'
Nov 25 20:24:13 compute-0 sudo[238373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v578: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:13 compute-0 python3.9[238375]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:24:13 compute-0 sudo[238373]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:14 compute-0 python3.9[238527]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:24:14 compute-0 ceph-mon[75144]: pgmap v578: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:14 compute-0 sudo[238677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdeepyrctcpbnjihkorhyvazpiwrayid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102254.5932264-910-74607293513794/AnsiballZ_systemd_service.py'
Nov 25 20:24:14 compute-0 sudo[238677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v579: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:15 compute-0 python3.9[238679]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:24:15 compute-0 systemd[1]: Reloading.
Nov 25 20:24:15 compute-0 systemd-sysv-generator[238711]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:24:15 compute-0 systemd-rc-local-generator[238707]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:24:15 compute-0 sudo[238677]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:16 compute-0 sudo[238863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thgyrsunehpkuvxbqtkebnhoasmwmixk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102255.8685584-918-190205989323537/AnsiballZ_command.py'
Nov 25 20:24:16 compute-0 sudo[238863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:16 compute-0 python3.9[238865]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:24:16 compute-0 ceph-mon[75144]: pgmap v579: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:16 compute-0 sudo[238863]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:16 compute-0 sudo[239016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wponqesbpgpmgpbapqbmopwhicppcxdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102256.6691155-918-226184438877396/AnsiballZ_command.py'
Nov 25 20:24:16 compute-0 sudo[239016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:17 compute-0 python3.9[239018]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:24:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v580: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:17 compute-0 sudo[239016]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:17 compute-0 ceph-mon[75144]: pgmap v580: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:17 compute-0 sudo[239169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imotgohtkmvgyjzmviowcphkxypzdocr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102257.3848784-918-99873894806448/AnsiballZ_command.py'
Nov 25 20:24:17 compute-0 sudo[239169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:18 compute-0 python3.9[239171]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:24:18 compute-0 sudo[239169]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:18 compute-0 sudo[239322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeyfmklnwemiqedpvtxgmnharddqfjzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102258.22772-918-76468179984599/AnsiballZ_command.py'
Nov 25 20:24:18 compute-0 sudo[239322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:18 compute-0 python3.9[239324]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:24:18 compute-0 sudo[239322]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v581: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:19 compute-0 sudo[239475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcfbepyegvtndcdgzhsfhtpredjapnfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102258.9971936-918-131251046549460/AnsiballZ_command.py'
Nov 25 20:24:19 compute-0 sudo[239475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:19 compute-0 python3.9[239477]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:24:19 compute-0 sudo[239475]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:20 compute-0 sudo[239628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gonnypxznkcmfconyznxopevvizdataz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102259.7350037-918-263158659244228/AnsiballZ_command.py'
Nov 25 20:24:20 compute-0 sudo[239628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:20 compute-0 ceph-mon[75144]: pgmap v581: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:20 compute-0 python3.9[239630]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:24:20 compute-0 sudo[239628]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:20 compute-0 sudo[239781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pngxvkgzuwbbblwppujqmfltzkjnufii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102260.554291-918-125962932476397/AnsiballZ_command.py'
Nov 25 20:24:20 compute-0 sudo[239781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:21 compute-0 python3.9[239783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:24:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v582: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:21 compute-0 sudo[239781]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:21 compute-0 sudo[239935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfgwjnhupprjskfzabpcqolrlvjsgdwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102261.4004176-918-4288523661338/AnsiballZ_command.py'
Nov 25 20:24:21 compute-0 sudo[239935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:21 compute-0 python3.9[239937]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:24:21 compute-0 sudo[239935]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:22 compute-0 ceph-mon[75144]: pgmap v582: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v583: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:23 compute-0 sudo[240099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szpdmtvshysaxmdypkjeobrfzzayiwrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102263.0975354-997-5786458691781/AnsiballZ_file.py'
Nov 25 20:24:23 compute-0 sudo[240099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:23 compute-0 podman[240062]: 2025-11-25 20:24:23.530611779 +0000 UTC m=+0.082024002 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:24:23 compute-0 python3.9[240109]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:23 compute-0 sudo[240099]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:24 compute-0 sudo[240259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfydoinxxknxqefnvtllliwopwcewkhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102263.9458282-997-152558131618181/AnsiballZ_file.py'
Nov 25 20:24:24 compute-0 sudo[240259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:24 compute-0 ceph-mon[75144]: pgmap v583: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:24 compute-0 python3.9[240261]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:24 compute-0 sudo[240259]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:25 compute-0 sudo[240411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwjzhddhgrqsacrloddqgnkvzwpkuztg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102264.697242-997-113623343185805/AnsiballZ_file.py'
Nov 25 20:24:25 compute-0 sudo[240411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v584: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:25 compute-0 python3.9[240413]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:25 compute-0 sudo[240411]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:25 compute-0 sudo[240563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niinzyaliwgkycdhxyomadlsdhylkajr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102265.5586095-1019-22536629185434/AnsiballZ_file.py'
Nov 25 20:24:25 compute-0 sudo[240563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:26 compute-0 python3.9[240565]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:26 compute-0 sudo[240563]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:26 compute-0 ceph-mon[75144]: pgmap v584: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:26 compute-0 sudo[240715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kridkzpodclxplwhhohaccroxdssiaxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102266.3267307-1019-275414366085863/AnsiballZ_file.py'
Nov 25 20:24:26 compute-0 sudo[240715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:24:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:24:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:24:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:24:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:24:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:24:26 compute-0 python3.9[240717]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:26 compute-0 sudo[240715]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v585: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:27 compute-0 sudo[240867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juwojvcboprxedcrgqemdutjzsyxynrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102267.116786-1019-255880588758694/AnsiballZ_file.py'
Nov 25 20:24:27 compute-0 sudo[240867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:27 compute-0 python3.9[240869]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:27 compute-0 sudo[240867]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:28 compute-0 sudo[241019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyhkvayjhclrvipuorixthfthkmsfiwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102267.9291005-1019-176179858845470/AnsiballZ_file.py'
Nov 25 20:24:28 compute-0 ceph-mon[75144]: pgmap v585: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:28 compute-0 sudo[241019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:28 compute-0 python3.9[241021]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:28 compute-0 sudo[241019]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:29 compute-0 sudo[241171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-secfvppurddjsgbjgfidxjgaoakqtqmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102268.7391372-1019-68366382035181/AnsiballZ_file.py'
Nov 25 20:24:29 compute-0 sudo[241171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v586: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:29 compute-0 python3.9[241173]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:29 compute-0 sudo[241171]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:29 compute-0 ceph-mon[75144]: pgmap v586: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:29 compute-0 sudo[241323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwgutdcsnvzzbhucyozepsyvwqgfiipl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102269.5261376-1019-214154649598433/AnsiballZ_file.py'
Nov 25 20:24:29 compute-0 sudo[241323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:30 compute-0 python3.9[241325]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:30 compute-0 sudo[241323]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:30 compute-0 podman[241326]: 2025-11-25 20:24:30.500622597 +0000 UTC m=+0.082293689 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:24:30 compute-0 sudo[241495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rospufprmjrnyrosyiyzihskcgsstmml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102270.622989-1019-89427222178337/AnsiballZ_file.py'
Nov 25 20:24:30 compute-0 sudo[241495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v587: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:31 compute-0 python3.9[241497]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:31 compute-0 sudo[241495]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:32 compute-0 ceph-mon[75144]: pgmap v587: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v588: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:34 compute-0 ceph-mon[75144]: pgmap v588: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:35 compute-0 podman[241522]: 2025-11-25 20:24:35.089475938 +0000 UTC m=+0.182946950 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 25 20:24:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v589: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:36 compute-0 ceph-mon[75144]: pgmap v589: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:36 compute-0 sudo[241674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrexteatqdytkntdyggmwgfskwffiyan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102276.2411354-1208-153808752499430/AnsiballZ_getent.py'
Nov 25 20:24:36 compute-0 sudo[241674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:36 compute-0 python3.9[241676]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 20:24:36 compute-0 sudo[241674]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v590: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:37 compute-0 sudo[241827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuwyvagiwdpqsixhcuimqazsglzljqlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102277.2309268-1216-142889916375499/AnsiballZ_group.py'
Nov 25 20:24:37 compute-0 sudo[241827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:38 compute-0 python3.9[241829]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 20:24:38 compute-0 groupadd[241830]: group added to /etc/group: name=nova, GID=42436
Nov 25 20:24:38 compute-0 groupadd[241830]: group added to /etc/gshadow: name=nova
Nov 25 20:24:38 compute-0 groupadd[241830]: new group: name=nova, GID=42436
Nov 25 20:24:38 compute-0 sudo[241827]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:38 compute-0 ceph-mon[75144]: pgmap v590: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:39 compute-0 sudo[241985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esmyomkwoefqsaalbpdhozdpknmpppib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102278.4367504-1224-272569095619469/AnsiballZ_user.py'
Nov 25 20:24:39 compute-0 sudo[241985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v591: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:39 compute-0 python3.9[241987]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 20:24:39 compute-0 useradd[241989]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 25 20:24:39 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 20:24:39 compute-0 useradd[241989]: add 'nova' to group 'libvirt'
Nov 25 20:24:39 compute-0 useradd[241989]: add 'nova' to shadow group 'libvirt'
Nov 25 20:24:39 compute-0 sudo[241985]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:40 compute-0 sshd-session[242021]: Accepted publickey for zuul from 192.168.122.30 port 53774 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:24:40 compute-0 systemd-logind[789]: New session 51 of user zuul.
Nov 25 20:24:40 compute-0 ceph-mon[75144]: pgmap v591: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:40 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 25 20:24:40 compute-0 sshd-session[242021]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:24:40 compute-0 sshd-session[242024]: Received disconnect from 192.168.122.30 port 53774:11: disconnected by user
Nov 25 20:24:40 compute-0 sshd-session[242024]: Disconnected from user zuul 192.168.122.30 port 53774
Nov 25 20:24:40 compute-0 sshd-session[242021]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:24:40 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 25 20:24:40 compute-0 systemd-logind[789]: Session 51 logged out. Waiting for processes to exit.
Nov 25 20:24:40 compute-0 systemd-logind[789]: Removed session 51.
Nov 25 20:24:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v592: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:41 compute-0 python3.9[242174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:24:41 compute-0 ceph-mon[75144]: pgmap v592: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:42 compute-0 python3.9[242295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764102280.7904098-1249-99986964078945/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:43 compute-0 sudo[242419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:24:43 compute-0 sudo[242419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:43 compute-0 sudo[242419]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:43 compute-0 sudo[242471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:24:43 compute-0 sudo[242471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:43 compute-0 sudo[242471]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v593: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:43 compute-0 sudo[242496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:24:43 compute-0 sudo[242496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:43 compute-0 sudo[242496]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:43 compute-0 python3.9[242469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:24:43 compute-0 sudo[242521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:24:43 compute-0 sudo[242521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:43 compute-0 python3.9[242635]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:43 compute-0 sudo[242521]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:24:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:24:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:24:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:24:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:24:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:24:43 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 6b2e9ba0-300f-4319-a824-cc78d08a8a38 does not exist
Nov 25 20:24:43 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev f274e24f-8c9f-4644-8931-4f81171805b6 does not exist
Nov 25 20:24:43 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 48025e42-52b6-4017-a958-564e1e769929 does not exist
Nov 25 20:24:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:24:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:24:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:24:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:24:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:24:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:24:43 compute-0 sudo[242677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:24:43 compute-0 sudo[242677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:43 compute-0 sudo[242677]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:44 compute-0 sudo[242725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:24:44 compute-0 sudo[242725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:44 compute-0 sudo[242725]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:44 compute-0 sudo[242779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:24:44 compute-0 sudo[242779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:44 compute-0 sudo[242779]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:44 compute-0 sudo[242805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:24:44 compute-0 sudo[242805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:44 compute-0 ceph-mon[75144]: pgmap v593: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:44 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:24:44 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:24:44 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:24:44 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:24:44 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:24:44 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:24:44 compute-0 python3.9[242914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:24:44 compute-0 podman[242942]: 2025-11-25 20:24:44.526027547 +0000 UTC m=+0.066653360 container create c6d92ef03425d2263da451af97a25660b01c8546b110bd92e065364791645509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:24:44 compute-0 systemd[1]: Started libpod-conmon-c6d92ef03425d2263da451af97a25660b01c8546b110bd92e065364791645509.scope.
Nov 25 20:24:44 compute-0 podman[242942]: 2025-11-25 20:24:44.505783314 +0000 UTC m=+0.046409137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:24:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:24:44 compute-0 podman[242942]: 2025-11-25 20:24:44.612448836 +0000 UTC m=+0.153074669 container init c6d92ef03425d2263da451af97a25660b01c8546b110bd92e065364791645509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:24:44 compute-0 podman[242942]: 2025-11-25 20:24:44.619651319 +0000 UTC m=+0.160277122 container start c6d92ef03425d2263da451af97a25660b01c8546b110bd92e065364791645509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:24:44 compute-0 podman[242942]: 2025-11-25 20:24:44.622607778 +0000 UTC m=+0.163233581 container attach c6d92ef03425d2263da451af97a25660b01c8546b110bd92e065364791645509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:24:44 compute-0 admiring_noether[242966]: 167 167
Nov 25 20:24:44 compute-0 systemd[1]: libpod-c6d92ef03425d2263da451af97a25660b01c8546b110bd92e065364791645509.scope: Deactivated successfully.
Nov 25 20:24:44 compute-0 podman[242942]: 2025-11-25 20:24:44.627421618 +0000 UTC m=+0.168047431 container died c6d92ef03425d2263da451af97a25660b01c8546b110bd92e065364791645509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:24:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ef3537426aab40150e5400ab5d7ac48af51c426b1bc3ca288e469dd6fc62531-merged.mount: Deactivated successfully.
Nov 25 20:24:44 compute-0 podman[242942]: 2025-11-25 20:24:44.661964604 +0000 UTC m=+0.202590407 container remove c6d92ef03425d2263da451af97a25660b01c8546b110bd92e065364791645509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:24:44 compute-0 systemd[1]: libpod-conmon-c6d92ef03425d2263da451af97a25660b01c8546b110bd92e065364791645509.scope: Deactivated successfully.
Nov 25 20:24:44 compute-0 podman[243068]: 2025-11-25 20:24:44.846743253 +0000 UTC m=+0.037042615 container create 72ff310418e658873b752924824073abab4c7ed76475a445843463b7d07a897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:24:44 compute-0 systemd[1]: Started libpod-conmon-72ff310418e658873b752924824073abab4c7ed76475a445843463b7d07a897e.scope.
Nov 25 20:24:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149bb9f3248d5a60217b3748fac39a179972508b69f20e720669726742e02fa6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149bb9f3248d5a60217b3748fac39a179972508b69f20e720669726742e02fa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149bb9f3248d5a60217b3748fac39a179972508b69f20e720669726742e02fa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149bb9f3248d5a60217b3748fac39a179972508b69f20e720669726742e02fa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149bb9f3248d5a60217b3748fac39a179972508b69f20e720669726742e02fa6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:44 compute-0 podman[243068]: 2025-11-25 20:24:44.918619352 +0000 UTC m=+0.108918734 container init 72ff310418e658873b752924824073abab4c7ed76475a445843463b7d07a897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 20:24:44 compute-0 podman[243068]: 2025-11-25 20:24:44.832060869 +0000 UTC m=+0.022360271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:24:44 compute-0 podman[243068]: 2025-11-25 20:24:44.928700042 +0000 UTC m=+0.118999404 container start 72ff310418e658873b752924824073abab4c7ed76475a445843463b7d07a897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:24:44 compute-0 podman[243068]: 2025-11-25 20:24:44.933011348 +0000 UTC m=+0.123310710 container attach 72ff310418e658873b752924824073abab4c7ed76475a445843463b7d07a897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:24:45 compute-0 python3.9[243118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764102283.9547844-1249-26205254463028/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v594: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:45 compute-0 python3.9[243278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:24:45 compute-0 clever_keldysh[243116]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:24:45 compute-0 clever_keldysh[243116]: --> relative data size: 1.0
Nov 25 20:24:45 compute-0 clever_keldysh[243116]: --> All data devices are unavailable
Nov 25 20:24:45 compute-0 systemd[1]: libpod-72ff310418e658873b752924824073abab4c7ed76475a445843463b7d07a897e.scope: Deactivated successfully.
Nov 25 20:24:45 compute-0 podman[243068]: 2025-11-25 20:24:45.96942817 +0000 UTC m=+1.159727572 container died 72ff310418e658873b752924824073abab4c7ed76475a445843463b7d07a897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-149bb9f3248d5a60217b3748fac39a179972508b69f20e720669726742e02fa6-merged.mount: Deactivated successfully.
Nov 25 20:24:46 compute-0 podman[243068]: 2025-11-25 20:24:46.034696281 +0000 UTC m=+1.224995643 container remove 72ff310418e658873b752924824073abab4c7ed76475a445843463b7d07a897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:24:46 compute-0 systemd[1]: libpod-conmon-72ff310418e658873b752924824073abab4c7ed76475a445843463b7d07a897e.scope: Deactivated successfully.
Nov 25 20:24:46 compute-0 sudo[242805]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:46 compute-0 sudo[243370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:24:46 compute-0 sudo[243370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:46 compute-0 sudo[243370]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:46 compute-0 sudo[243408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:24:46 compute-0 sudo[243408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:46 compute-0 sudo[243408]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:46 compute-0 ceph-mon[75144]: pgmap v594: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:46 compute-0 sudo[243464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:24:46 compute-0 sudo[243464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:46 compute-0 sudo[243464]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:46 compute-0 sudo[243506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:24:46 compute-0 sudo[243506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:46 compute-0 python3.9[243501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764102285.263202-1249-237273464948886/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:46 compute-0 podman[243630]: 2025-11-25 20:24:46.742996598 +0000 UTC m=+0.052837829 container create 1e369ccb5fbf2ab0bbdca4e749ca134efcc233384e689243ad1b05403d1c41ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 20:24:46 compute-0 systemd[1]: Started libpod-conmon-1e369ccb5fbf2ab0bbdca4e749ca134efcc233384e689243ad1b05403d1c41ce.scope.
Nov 25 20:24:46 compute-0 podman[243630]: 2025-11-25 20:24:46.717314469 +0000 UTC m=+0.027155750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:24:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:24:46 compute-0 podman[243630]: 2025-11-25 20:24:46.83770022 +0000 UTC m=+0.147541501 container init 1e369ccb5fbf2ab0bbdca4e749ca134efcc233384e689243ad1b05403d1c41ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:24:46 compute-0 podman[243630]: 2025-11-25 20:24:46.84516414 +0000 UTC m=+0.155005341 container start 1e369ccb5fbf2ab0bbdca4e749ca134efcc233384e689243ad1b05403d1c41ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:24:46 compute-0 podman[243630]: 2025-11-25 20:24:46.848257253 +0000 UTC m=+0.158098554 container attach 1e369ccb5fbf2ab0bbdca4e749ca134efcc233384e689243ad1b05403d1c41ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:24:46 compute-0 festive_wilson[243679]: 167 167
Nov 25 20:24:46 compute-0 systemd[1]: libpod-1e369ccb5fbf2ab0bbdca4e749ca134efcc233384e689243ad1b05403d1c41ce.scope: Deactivated successfully.
Nov 25 20:24:46 compute-0 podman[243630]: 2025-11-25 20:24:46.851710286 +0000 UTC m=+0.161551477 container died 1e369ccb5fbf2ab0bbdca4e749ca134efcc233384e689243ad1b05403d1c41ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c18f5918746b1119c1c9d2db455ea20460c118f6523e3c2c7fb8d916824b208-merged.mount: Deactivated successfully.
Nov 25 20:24:46 compute-0 podman[243630]: 2025-11-25 20:24:46.888312728 +0000 UTC m=+0.198153909 container remove 1e369ccb5fbf2ab0bbdca4e749ca134efcc233384e689243ad1b05403d1c41ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:24:46 compute-0 systemd[1]: libpod-conmon-1e369ccb5fbf2ab0bbdca4e749ca134efcc233384e689243ad1b05403d1c41ce.scope: Deactivated successfully.
Nov 25 20:24:47 compute-0 podman[243760]: 2025-11-25 20:24:47.100361418 +0000 UTC m=+0.059482998 container create de42302a6fb9ed12e734a7ca46f9b90c103e5ca76c36e0d5d8eff19c1e5cc4a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:24:47 compute-0 python3.9[243754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:24:47 compute-0 systemd[1]: Started libpod-conmon-de42302a6fb9ed12e734a7ca46f9b90c103e5ca76c36e0d5d8eff19c1e5cc4a7.scope.
Nov 25 20:24:47 compute-0 podman[243760]: 2025-11-25 20:24:47.082653773 +0000 UTC m=+0.041775363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:24:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4699d59a86f6ed358a3beea797c5be8f6ca12ff4d277a47ce95ffaae5ce0076/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4699d59a86f6ed358a3beea797c5be8f6ca12ff4d277a47ce95ffaae5ce0076/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4699d59a86f6ed358a3beea797c5be8f6ca12ff4d277a47ce95ffaae5ce0076/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4699d59a86f6ed358a3beea797c5be8f6ca12ff4d277a47ce95ffaae5ce0076/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v595: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:47 compute-0 podman[243760]: 2025-11-25 20:24:47.206934228 +0000 UTC m=+0.166055818 container init de42302a6fb9ed12e734a7ca46f9b90c103e5ca76c36e0d5d8eff19c1e5cc4a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:24:47 compute-0 podman[243760]: 2025-11-25 20:24:47.22006341 +0000 UTC m=+0.179184970 container start de42302a6fb9ed12e734a7ca46f9b90c103e5ca76c36e0d5d8eff19c1e5cc4a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:24:47 compute-0 podman[243760]: 2025-11-25 20:24:47.223514053 +0000 UTC m=+0.182635643 container attach de42302a6fb9ed12e734a7ca46f9b90c103e5ca76c36e0d5d8eff19c1e5cc4a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:24:47 compute-0 python3.9[243902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764102286.6368787-1249-50069155555949/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]: {
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:     "0": [
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:         {
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "devices": [
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "/dev/loop3"
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             ],
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_name": "ceph_lv0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_size": "21470642176",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "name": "ceph_lv0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "tags": {
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.cluster_name": "ceph",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.crush_device_class": "",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.encrypted": "0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.osd_id": "0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.type": "block",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.vdo": "0"
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             },
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "type": "block",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "vg_name": "ceph_vg0"
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:         }
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:     ],
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:     "1": [
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:         {
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "devices": [
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "/dev/loop4"
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             ],
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_name": "ceph_lv1",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_size": "21470642176",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "name": "ceph_lv1",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "tags": {
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.cluster_name": "ceph",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.crush_device_class": "",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.encrypted": "0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.osd_id": "1",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.type": "block",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.vdo": "0"
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             },
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "type": "block",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "vg_name": "ceph_vg1"
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:         }
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:     ],
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:     "2": [
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:         {
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "devices": [
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "/dev/loop5"
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             ],
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_name": "ceph_lv2",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_size": "21470642176",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "name": "ceph_lv2",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "tags": {
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.cluster_name": "ceph",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.crush_device_class": "",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.encrypted": "0",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.osd_id": "2",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.type": "block",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:                 "ceph.vdo": "0"
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             },
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "type": "block",
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:             "vg_name": "ceph_vg2"
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:         }
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]:     ]
Nov 25 20:24:47 compute-0 quirky_keldysh[243777]: }
Nov 25 20:24:48 compute-0 systemd[1]: libpod-de42302a6fb9ed12e734a7ca46f9b90c103e5ca76c36e0d5d8eff19c1e5cc4a7.scope: Deactivated successfully.
Nov 25 20:24:48 compute-0 podman[243760]: 2025-11-25 20:24:48.014092788 +0000 UTC m=+0.973214398 container died de42302a6fb9ed12e734a7ca46f9b90c103e5ca76c36e0d5d8eff19c1e5cc4a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4699d59a86f6ed358a3beea797c5be8f6ca12ff4d277a47ce95ffaae5ce0076-merged.mount: Deactivated successfully.
Nov 25 20:24:48 compute-0 podman[243760]: 2025-11-25 20:24:48.089499342 +0000 UTC m=+1.048620922 container remove de42302a6fb9ed12e734a7ca46f9b90c103e5ca76c36e0d5d8eff19c1e5cc4a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:24:48 compute-0 systemd[1]: libpod-conmon-de42302a6fb9ed12e734a7ca46f9b90c103e5ca76c36e0d5d8eff19c1e5cc4a7.scope: Deactivated successfully.
Nov 25 20:24:48 compute-0 sudo[243506]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:48 compute-0 sudo[243997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:24:48 compute-0 sudo[243997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:48 compute-0 sudo[243997]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:48 compute-0 sudo[244045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:24:48 compute-0 sudo[244045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:48 compute-0 sudo[244045]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:48 compute-0 ceph-mon[75144]: pgmap v595: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:48 compute-0 sudo[244091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:24:48 compute-0 sudo[244091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:48 compute-0 sudo[244091]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:48 compute-0 sudo[244145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:24:48 compute-0 sudo[244145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:48 compute-0 python3.9[244147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:24:48 compute-0 podman[244252]: 2025-11-25 20:24:48.852825715 +0000 UTC m=+0.064535433 container create 8bc8c8cfc49deab62ae4f3c9abe563bd9bd93658750d07af8b8b669247ca7907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 20:24:48 compute-0 systemd[1]: Started libpod-conmon-8bc8c8cfc49deab62ae4f3c9abe563bd9bd93658750d07af8b8b669247ca7907.scope.
Nov 25 20:24:48 compute-0 podman[244252]: 2025-11-25 20:24:48.828529064 +0000 UTC m=+0.040238792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:24:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:24:48 compute-0 podman[244252]: 2025-11-25 20:24:48.941476304 +0000 UTC m=+0.153186072 container init 8bc8c8cfc49deab62ae4f3c9abe563bd9bd93658750d07af8b8b669247ca7907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:24:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:24:48.941 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:24:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:24:48.941 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:24:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:24:48.941 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:24:48 compute-0 podman[244252]: 2025-11-25 20:24:48.947886946 +0000 UTC m=+0.159596634 container start 8bc8c8cfc49deab62ae4f3c9abe563bd9bd93658750d07af8b8b669247ca7907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:24:48 compute-0 podman[244252]: 2025-11-25 20:24:48.951838142 +0000 UTC m=+0.163547820 container attach 8bc8c8cfc49deab62ae4f3c9abe563bd9bd93658750d07af8b8b669247ca7907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_robinson, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:24:48 compute-0 inspiring_robinson[244296]: 167 167
Nov 25 20:24:48 compute-0 systemd[1]: libpod-8bc8c8cfc49deab62ae4f3c9abe563bd9bd93658750d07af8b8b669247ca7907.scope: Deactivated successfully.
Nov 25 20:24:48 compute-0 podman[244252]: 2025-11-25 20:24:48.953726402 +0000 UTC m=+0.165436090 container died 8bc8c8cfc49deab62ae4f3c9abe563bd9bd93658750d07af8b8b669247ca7907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ead5f173ccf3516c9f5f8b917bca4194d951569b0fe2566223b0531b4440be8-merged.mount: Deactivated successfully.
Nov 25 20:24:48 compute-0 podman[244252]: 2025-11-25 20:24:48.991291781 +0000 UTC m=+0.203001459 container remove 8bc8c8cfc49deab62ae4f3c9abe563bd9bd93658750d07af8b8b669247ca7907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 20:24:49 compute-0 systemd[1]: libpod-conmon-8bc8c8cfc49deab62ae4f3c9abe563bd9bd93658750d07af8b8b669247ca7907.scope: Deactivated successfully.
Nov 25 20:24:49 compute-0 podman[244372]: 2025-11-25 20:24:49.186330484 +0000 UTC m=+0.050146836 container create 608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nightingale, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:24:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v596: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:49 compute-0 python3.9[244366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764102288.0077584-1249-231996891437147/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:49 compute-0 systemd[1]: Started libpod-conmon-608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288.scope.
Nov 25 20:24:49 compute-0 podman[244372]: 2025-11-25 20:24:49.168184808 +0000 UTC m=+0.032001250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:24:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddc5e7a8ea7ba952d233b6ad62aa45befd3789405af57ab28e9bfc16c99029b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddc5e7a8ea7ba952d233b6ad62aa45befd3789405af57ab28e9bfc16c99029b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddc5e7a8ea7ba952d233b6ad62aa45befd3789405af57ab28e9bfc16c99029b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddc5e7a8ea7ba952d233b6ad62aa45befd3789405af57ab28e9bfc16c99029b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:24:49 compute-0 podman[244372]: 2025-11-25 20:24:49.298608687 +0000 UTC m=+0.162425109 container init 608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:24:49 compute-0 podman[244372]: 2025-11-25 20:24:49.304350351 +0000 UTC m=+0.168166703 container start 608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:24:49 compute-0 podman[244372]: 2025-11-25 20:24:49.307597549 +0000 UTC m=+0.171413941 container attach 608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 20:24:49 compute-0 sudo[244542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvpjmcvnfzxlbxaabejbjwjhmbbowecp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102289.5662158-1332-133925260955754/AnsiballZ_file.py'
Nov 25 20:24:49 compute-0 sudo[244542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:50 compute-0 python3.9[244544]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:50 compute-0 sudo[244542]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:50 compute-0 ceph-mon[75144]: pgmap v596: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]: {
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "osd_id": 2,
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "type": "bluestore"
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:     },
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "osd_id": 1,
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "type": "bluestore"
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:     },
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "osd_id": 0,
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:         "type": "bluestore"
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]:     }
Nov 25 20:24:50 compute-0 sweet_nightingale[244388]: }
Nov 25 20:24:50 compute-0 systemd[1]: libpod-608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288.scope: Deactivated successfully.
Nov 25 20:24:50 compute-0 podman[244372]: 2025-11-25 20:24:50.364218483 +0000 UTC m=+1.228034845 container died 608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:24:50 compute-0 systemd[1]: libpod-608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288.scope: Consumed 1.057s CPU time.
Nov 25 20:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bddc5e7a8ea7ba952d233b6ad62aa45befd3789405af57ab28e9bfc16c99029b-merged.mount: Deactivated successfully.
Nov 25 20:24:50 compute-0 podman[244372]: 2025-11-25 20:24:50.438404583 +0000 UTC m=+1.302220945 container remove 608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:24:50 compute-0 systemd[1]: libpod-conmon-608048e166b718e1dc42e6383e0acab5f7df652b5a472aad5f362dda7a154288.scope: Deactivated successfully.
Nov 25 20:24:50 compute-0 sudo[244145]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:24:50 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:24:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:24:50 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:24:50 compute-0 sudo[244699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:24:50 compute-0 sudo[244699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:50 compute-0 sudo[244699]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:50 compute-0 sudo[244768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpjthhfxejpiiejblocxlxcdppokgiir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102290.2640922-1340-231385897881490/AnsiballZ_copy.py'
Nov 25 20:24:50 compute-0 sudo[244768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:50 compute-0 sudo[244748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:24:50 compute-0 sudo[244748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:24:50 compute-0 sudo[244748]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:50 compute-0 python3.9[244782]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:24:50 compute-0 sudo[244768]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v597: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:51 compute-0 sudo[244934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdnzyzbdexpajxnqotrujvngwuytouds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102291.0357366-1348-125961794777785/AnsiballZ_stat.py'
Nov 25 20:24:51 compute-0 sudo[244934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:51 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:24:51 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:24:51 compute-0 ceph-mon[75144]: pgmap v597: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:51 compute-0 python3.9[244936]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:24:51 compute-0 sudo[244934]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:52 compute-0 sudo[245086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxkbkdlbfccgcxwwyagsdylhknrlmkwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102291.9488049-1356-252723474145837/AnsiballZ_stat.py'
Nov 25 20:24:52 compute-0 sudo[245086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:52 compute-0 python3.9[245088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:24:52 compute-0 sudo[245086]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:53 compute-0 sudo[245209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miytzcrhnbjglevnkvjrpnyvakowokse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102291.9488049-1356-252723474145837/AnsiballZ_copy.py'
Nov 25 20:24:53 compute-0 sudo[245209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v598: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:53 compute-0 python3.9[245211]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764102291.9488049-1356-252723474145837/.source _original_basename=.cnwaq_1_ follow=False checksum=0070216d163489694e5a90207e59d59bd5939096 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 20:24:53 compute-0 sudo[245209]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:54 compute-0 podman[245337]: 2025-11-25 20:24:54.010088135 +0000 UTC m=+0.092412474 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 20:24:54 compute-0 python3.9[245373]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:24:54 compute-0 ceph-mon[75144]: pgmap v598: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:55 compute-0 python3.9[245535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:24:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:24:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v599: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:55 compute-0 python3.9[245656]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764102294.4607635-1382-142476707795554/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:56 compute-0 ceph-mon[75144]: pgmap v599: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:56 compute-0 python3.9[245806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:24:56
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['backups', 'vms', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 25 20:24:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:24:57 compute-0 python3.9[245927]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764102295.8461504-1397-96432473433627/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:24:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v600: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:57 compute-0 sudo[246077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfewxnfhumytdbudxtioiatnzacbnccs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102297.341195-1414-121164714165026/AnsiballZ_container_config_data.py'
Nov 25 20:24:57 compute-0 sudo[246077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:57 compute-0 python3.9[246079]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 20:24:57 compute-0 sudo[246077]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:58 compute-0 ceph-mon[75144]: pgmap v600: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:58 compute-0 sudo[246229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aanolmsvvqprtjrnlpezxpgixrpidodd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102298.152863-1423-192135178128863/AnsiballZ_container_config_hash.py'
Nov 25 20:24:58 compute-0 sudo[246229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:58 compute-0 python3.9[246231]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:24:58 compute-0 sudo[246229]: pam_unix(sudo:session): session closed for user root
Nov 25 20:24:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v601: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:24:59 compute-0 sudo[246381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdpyihpzrelotjfltcxpjgvpwtwsmcmy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764102299.020459-1433-274749637188062/AnsiballZ_edpm_container_manage.py'
Nov 25 20:24:59 compute-0 sudo[246381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:24:59 compute-0 python3[246383]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:25:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:00 compute-0 ceph-mon[75144]: pgmap v601: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:00 compute-0 podman[246421]: 2025-11-25 20:25:00.950329522 +0000 UTC m=+0.053176997 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 20:25:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v602: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:25:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:25:02 compute-0 ceph-mon[75144]: pgmap v602: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v603: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:03 compute-0 ceph-mon[75144]: pgmap v603: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v604: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:06 compute-0 ceph-mon[75144]: pgmap v604: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v605: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:08 compute-0 ceph-mon[75144]: pgmap v605: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:08 compute-0 podman[246473]: 2025-11-25 20:25:08.979367382 +0000 UTC m=+3.754082550 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:25:09 compute-0 podman[246398]: 2025-11-25 20:25:09.002229996 +0000 UTC m=+9.225734189 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 20:25:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v606: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:09 compute-0 podman[246525]: 2025-11-25 20:25:09.236297224 +0000 UTC m=+0.073727150 container create 9665e4aaf06ffd7441575ab019335b61d55be9231c95a603db7970479f9aa6ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 25 20:25:09 compute-0 podman[246525]: 2025-11-25 20:25:09.196528323 +0000 UTC m=+0.033958309 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 20:25:09 compute-0 python3[246383]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
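
The nova_compute_init container created above runs nova_statedir_ownership.py once (restart 'never', net 'none') to normalize ownership of /var/lib/nova before nova_compute starts, skipping whatever NOVA_STATEDIR_OWNERSHIP_SKIP names (/var/lib/nova/compute_id here). A hypothetical sketch of that idea — not the actual script — which needs root and an existing nova user:

    import os
    import pwd

    STATEDIR = "/var/lib/nova"
    skip = set(os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":"))
    nova = pwd.getpwnam("nova")

    # Walk the state dir and hand every path not in the skip list to nova.
    for root, dirs, files in os.walk(STATEDIR):
        for name in dirs + files:
            path = os.path.join(root, name)
            if path in skip:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (nova.pw_uid, nova.pw_gid):
                os.lchown(path, nova.pw_uid, nova.pw_gid)
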
Nov 25 20:25:09 compute-0 sudo[246381]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:09 compute-0 ceph-mon[75144]: pgmap v606: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:10 compute-0 sudo[246713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjpnrydxmxmphuffxwjjhqvrhwktbrwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102309.6672914-1441-206838140592863/AnsiballZ_stat.py'
Nov 25 20:25:10 compute-0 sudo[246713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:10 compute-0 python3.9[246715]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:25:10 compute-0 sudo[246713]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:25:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 2968 writes, 12K keys, 2968 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 2968 writes, 2968 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1272 writes, 5290 keys, 1272 commit groups, 1.0 writes per commit group, ingest: 5.57 MB, 0.01 MB/s
                                           Interval WAL: 1272 writes, 1272 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    101.4      0.10              0.04         6    0.016       0      0       0.0       0.0
                                             L6      1/0    4.24 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.3    151.6    122.2      0.19              0.12         5    0.037     16K   2280       0.0       0.0
                                            Sum      1/0    4.24 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3    100.1    115.1      0.28              0.16        11    0.026     16K   2280       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.3    108.5    111.5      0.16              0.09         6    0.027     10K   1502       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    151.6    122.2      0.19              0.12         5    0.037     16K   2280       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    104.2      0.09              0.04         5    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.004
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.03 MB/s write, 0.03 GB read, 0.02 MB/s read, 0.3 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585aba031f0#2 capacity: 308.00 MB usage: 1.25 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(77,1.10 MB,0.356095%) FilterBlock(12,52.61 KB,0.0166806%) IndexBlock(12,100.02 KB,0.0317115%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
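
In the compaction tables above, W-Amp is write amplification. The Sum row's 3.3 can be reproduced from two other lines of the same dump: total compaction write volume divided by the bytes flushed into L0:

    # Worked check of the Sum row's W-Amp column from the stats dump.
    flush_gb = 0.009             # "Flush(GB): cumulative 0.009"
    compaction_write_gb = 0.03   # "Cumulative compaction: 0.03 GB write"
    print(f"{compaction_write_gb / flush_gb:.1f}")   # -> 3.3
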
Nov 25 20:25:11 compute-0 sudo[246867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaiqllyjfzuwyjsmdgiwoqmhnzjlnoto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102310.7673264-1453-6193193468315/AnsiballZ_container_config_data.py'
Nov 25 20:25:11 compute-0 sudo[246867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v607: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:11 compute-0 python3.9[246869]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 20:25:11 compute-0 sudo[246867]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:12 compute-0 sudo[247019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkzeelkqznazhfyxqsmcuftkuyuhcwkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102311.7009199-1462-40964006791859/AnsiballZ_container_config_hash.py'
Nov 25 20:25:12 compute-0 sudo[247019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:12 compute-0 ceph-mon[75144]: pgmap v607: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:12 compute-0 python3.9[247021]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:25:12 compute-0 sudo[247019]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:13 compute-0 sudo[247171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzzsbosweyfwmbdwhaqlvjftvapvbhgq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764102312.7115283-1472-257357020460200/AnsiballZ_edpm_container_manage.py'
Nov 25 20:25:13 compute-0 sudo[247171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v608: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:13 compute-0 python3[247173]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:25:13 compute-0 podman[247210]: 2025-11-25 20:25:13.587992774 +0000 UTC m=+0.049697436 container create 37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:25:13 compute-0 podman[247210]: 2025-11-25 20:25:13.560756193 +0000 UTC m=+0.022460865 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 20:25:13 compute-0 python3[247173]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
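
The PODMAN-CONTAINER-DEBUG lines show how edpm_container_manage flattens the config_data dict into podman create flags: environment -> --env, net -> --network, volumes -> --volume, with the raw config_data attached as a label so later runs can detect drift. A toy translator in the same spirit (not the real module code; command handling is simplified):

    # Toy config_data -> "podman create" argv translation.
    def podman_create_args(name: str, cfg: dict) -> list:
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", "/run/%s.pid" % name]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", "%s=%s" % (key, val)]
        args += ["--network", cfg.get("net", "bridge")]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        args.append("--privileged=%s" % cfg.get("privileged", False))
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if "command" in cfg:
            args += cfg["command"].split()  # naive: no shell quoting
        return args
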
Nov 25 20:25:13 compute-0 sudo[247171]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:14 compute-0 sudo[247398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdbxqdylrsaugijenxbrxeusbrvxjksf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102313.931682-1480-272510481077759/AnsiballZ_stat.py'
Nov 25 20:25:14 compute-0 sudo[247398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:14 compute-0 ceph-mon[75144]: pgmap v608: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:14 compute-0 python3.9[247400]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:25:14 compute-0 sudo[247398]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:15 compute-0 sudo[247552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsyxsvfdwwytxscuqxedpqswjodncpuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102314.8075366-1489-181452155334811/AnsiballZ_file.py'
Nov 25 20:25:15 compute-0 sudo[247552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v609: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:15 compute-0 python3.9[247554]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:25:15 compute-0 sudo[247552]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:15 compute-0 ceph-mon[75144]: pgmap v609: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:15 compute-0 sudo[247703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmxwpegzuhgmjwwycjscwjjrmeuwroso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102315.434873-1489-137660040924424/AnsiballZ_copy.py'
Nov 25 20:25:15 compute-0 sudo[247703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:16 compute-0 python3.9[247705]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764102315.434873-1489-137660040924424/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:25:16 compute-0 sudo[247703]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:16 compute-0 sudo[247779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyahfewdtlqqulicgxmlflkjxjhtoebb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102315.434873-1489-137660040924424/AnsiballZ_systemd.py'
Nov 25 20:25:16 compute-0 sudo[247779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:16 compute-0 python3.9[247781]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:25:16 compute-0 systemd[1]: Reloading.
Nov 25 20:25:16 compute-0 systemd-rc-local-generator[247810]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:25:16 compute-0 systemd-sysv-generator[247814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:25:17 compute-0 sudo[247779]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v610: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:17 compute-0 sudo[247891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypwvwsawuajuzyfitjxqlmlhvgoprivz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102315.434873-1489-137660040924424/AnsiballZ_systemd.py'
Nov 25 20:25:17 compute-0 sudo[247891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:17 compute-0 python3.9[247893]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:25:17 compute-0 systemd[1]: Reloading.
Nov 25 20:25:17 compute-0 systemd-sysv-generator[247924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:25:17 compute-0 systemd-rc-local-generator[247921]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:25:18 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 20:25:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:18 compute-0 ceph-mon[75144]: pgmap v610: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:18 compute-0 podman[247933]: 2025-11-25 20:25:18.272019699 +0000 UTC m=+0.102785568 container init 37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:25:18 compute-0 podman[247933]: 2025-11-25 20:25:18.280790531 +0000 UTC m=+0.111556380 container start 37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 20:25:18 compute-0 podman[247933]: nova_compute
Nov 25 20:25:18 compute-0 nova_compute[247949]: + sudo -E kolla_set_configs
Nov 25 20:25:18 compute-0 systemd[1]: Started nova_compute container.
Nov 25 20:25:18 compute-0 sudo[247891]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Validating config file
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying service configuration files
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Deleting /etc/ceph
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Creating directory /etc/ceph
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Writing out command to execute
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 20:25:18 compute-0 nova_compute[247949]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 20:25:18 compute-0 nova_compute[247949]: ++ cat /run_command
Nov 25 20:25:18 compute-0 nova_compute[247949]: + CMD=nova-compute
Nov 25 20:25:18 compute-0 nova_compute[247949]: + ARGS=
Nov 25 20:25:18 compute-0 nova_compute[247949]: + sudo kolla_copy_cacerts
Nov 25 20:25:18 compute-0 nova_compute[247949]: + [[ ! -n '' ]]
Nov 25 20:25:18 compute-0 nova_compute[247949]: + . kolla_extend_start
Nov 25 20:25:18 compute-0 nova_compute[247949]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 20:25:18 compute-0 nova_compute[247949]: Running command: 'nova-compute'
Nov 25 20:25:18 compute-0 nova_compute[247949]: + umask 0022
Nov 25 20:25:18 compute-0 nova_compute[247949]: + exec nova-compute
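
The trace above is kolla's standard start sequence: kolla_set_configs reads the config.json bind-mounted at /var/lib/kolla/config_files/config.json and, because KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, re-copies every listed file on each start before writing the service command to /run_command for the wrapper to exec. A bare-bones sketch of that copy loop (kolla's real implementation also handles globs, ownership, permissions, and optional files):

    import json
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    # COPY_ALWAYS: unconditionally copy each source into place.
    for item in cfg.get("config_files", []):
        shutil.copy(item["source"], item["dest"])

    # The wrapper later does, in effect: CMD=$(cat /run_command); exec $CMD
    with open("/run_command", "w") as f:
        f.write(cfg["command"])
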
Nov 25 20:25:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v611: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:19 compute-0 python3.9[248110]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:25:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:20 compute-0 python3.9[248261]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:25:20 compute-0 ceph-mon[75144]: pgmap v611: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:20 compute-0 python3.9[248411]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:25:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v612: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:21 compute-0 ceph-mon[75144]: pgmap v612: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:21 compute-0 sudo[248563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvcvasiexhklsqhjtntvnagatlmsnldv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102321.2499466-1549-144688464632094/AnsiballZ_podman_container.py'
Nov 25 20:25:21 compute-0 sudo[248563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:21 compute-0 nova_compute[247949]: 2025-11-25 20:25:21.919 247953 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 20:25:21 compute-0 nova_compute[247949]: 2025-11-25 20:25:21.920 247953 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 20:25:21 compute-0 nova_compute[247949]: 2025-11-25 20:25:21.920 247953 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 20:25:21 compute-0 nova_compute[247949]: 2025-11-25 20:25:21.920 247953 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
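
os_vif finds those three plugins through setuptools entry points in the 'os_vif' namespace, loaded via stevedore. The same enumeration in a few lines of Python (assuming the os-vif and stevedore packages are importable in the environment):

    from stevedore import extension

    # os_vif.initialize() builds an ExtensionManager over the 'os_vif'
    # entry-point namespace; names() yields the plugin names logged above.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    print(sorted(mgr.names()))   # e.g. ['linux_bridge', 'noop', 'ovs']
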
Nov 25 20:25:22 compute-0 python3.9[248565]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 20:25:22 compute-0 nova_compute[247949]: 2025-11-25 20:25:22.179 247953 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:25:22 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 20:25:22 compute-0 nova_compute[247949]: 2025-11-25 20:25:22.231 247953 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:25:22 compute-0 nova_compute[247949]: 2025-11-25 20:25:22.232 247953 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
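
That grep is a manual-scan capability probe: the string node.session.scan is searched for in the iscsiadm the service can see, and exit status 1 (string absent) means no manual-scan support. Note that inside this container /usr/sbin/iscsiadm was just replaced by the run-on-host shim (see the kolla trace above), so the probe greps the shim rather than a real binary. The probe boils down to:

    import subprocess

    # Exit code 0 -> iscsiadm mentions node.session.scan; 1 -> it does not.
    res = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        capture_output=True,
    )
    print(res.returncode == 0)   # False here, matching "returned: 1"
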
Nov 25 20:25:22 compute-0 sudo[248563]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:22 compute-0 nova_compute[247949]: 2025-11-25 20:25:22.842 247953 INFO nova.virt.driver [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 20:25:22 compute-0 sudo[248739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqdcsqbjxbhhfprjvyeoowksfwnbrmeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102322.542963-1557-110851829915851/AnsiballZ_systemd.py'
Nov 25 20:25:23 compute-0 sudo[248739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v613: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.226 247953 INFO nova.compute.provider_config [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.247 247953 DEBUG oslo_concurrency.lockutils [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.248 247953 DEBUG oslo_concurrency.lockutils [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.248 247953 DEBUG oslo_concurrency.lockutils [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.249 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.249 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.249 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.249 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.250 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.250 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.250 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.250 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.250 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.251 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.251 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.251 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.251 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.251 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.252 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.252 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.252 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.252 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.252 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.253 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.253 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.253 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.253 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.253 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.254 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.254 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.254 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.254 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.254 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.255 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.255 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.255 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.255 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.255 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.256 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.256 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.256 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.256 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.256 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.257 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.257 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.257 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.257 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.257 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.258 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.258 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.258 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.258 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.258 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.259 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.259 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.259 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.259 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.259 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.260 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.260 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.260 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.260 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.261 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.261 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.261 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.261 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.261 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.261 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.262 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.262 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.262 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.262 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.262 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.263 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.263 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.263 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.263 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.263 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.264 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.264 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.264 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.264 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.264 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.264 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.265 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.265 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.265 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.265 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.266 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.266 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.266 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.266 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.267 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.267 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.267 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.267 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.267 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.268 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.268 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.268 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.268 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.269 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.269 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.269 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.269 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.269 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.269 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.270 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.270 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.270 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.270 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.270 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.271 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.271 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.271 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.271 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.271 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.272 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.272 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.272 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.272 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.273 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.273 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.273 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.273 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.273 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.274 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.274 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.274 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.274 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.274 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.274 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.275 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.275 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.275 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.275 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.275 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.276 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.276 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.276 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.276 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.276 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.276 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.277 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.277 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.277 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.277 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.277 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.278 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.278 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.278 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.278 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.278 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.279 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.279 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.279 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.279 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.279 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.280 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.280 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.280 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.280 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.280 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.281 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.281 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.281 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.281 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.281 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.282 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.282 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.282 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.282 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.282 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.282 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.283 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.283 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.283 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.283 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.283 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.284 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.284 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.284 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.284 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.284 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.285 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.285 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.285 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.285 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.285 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.286 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.286 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.286 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.286 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.286 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.287 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.287 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.287 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.287 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.287 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.288 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.288 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.288 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.288 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.288 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.289 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.289 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.289 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.289 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.290 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.290 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.290 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.290 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.290 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.291 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.291 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.291 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.291 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.291 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.292 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.292 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.292 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.292 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.292 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.292 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.293 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.293 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.293 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.293 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.293 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.294 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.294 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.294 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.294 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.294 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.295 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.295 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.295 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.295 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.295 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.296 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.296 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.296 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.296 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.297 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.297 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.297 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.297 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.297 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.298 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.298 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.298 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.298 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.298 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.298 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.298 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.299 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.299 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.299 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.299 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.299 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.299 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.299 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.300 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.300 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.300 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.300 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.300 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.300 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.300 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.301 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.301 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.301 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.301 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.301 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.301 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.301 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.302 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.302 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.302 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.302 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.302 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.302 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.303 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.303 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.303 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.303 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.303 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.303 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.303 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.303 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.304 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.304 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.304 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.304 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.304 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.304 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.304 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.305 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.305 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.305 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.305 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.306 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.306 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.306 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.306 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.306 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.307 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.307 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.307 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.307 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.307 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.307 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.308 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.308 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.308 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.308 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.308 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.308 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.308 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.309 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.309 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.309 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.309 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.309 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.310 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.310 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.310 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.310 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.310 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.310 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.310 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.311 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.311 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.311 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.311 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.311 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.311 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.312 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.312 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.312 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.312 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.312 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.312 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.312 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.313 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.313 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.313 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.313 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.313 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.313 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.314 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.314 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.314 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.314 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.314 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.314 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.314 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.315 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.315 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.315 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.315 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.315 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.316 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.316 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.316 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.316 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.316 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.316 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.317 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.317 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.317 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.317 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 python3.9[248741]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.317 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.317 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.317 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.318 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.318 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.318 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.318 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.318 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.318 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.319 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.319 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.319 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.319 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.319 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.319 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.319 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.319 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.320 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.320 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.320 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.320 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.320 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.320 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.320 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.321 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.321 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.321 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.321 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.321 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.321 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.321 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.322 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.322 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.322 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.322 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.322 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.322 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.322 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.322 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.323 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.323 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.323 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.323 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.323 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.323 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.323 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.324 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.324 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.324 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.324 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.324 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.324 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.324 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.325 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.325 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.325 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.325 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.325 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.325 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.325 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.326 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.326 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.326 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.326 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.326 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.326 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.326 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.326 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.327 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.327 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.327 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.327 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.327 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.327 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.327 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.327 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.328 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.328 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.328 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.328 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.328 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.328 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.329 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.329 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.329 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.329 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.329 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.329 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.329 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.330 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.330 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.330 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.330 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.330 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.330 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.330 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.331 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.331 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.331 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.331 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.331 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.331 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.332 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.332 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.332 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.332 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.332 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.332 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.332 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.333 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.333 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.333 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.333 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.333 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.333 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.334 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.334 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.334 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.334 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.334 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.334 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.334 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.335 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.335 247953 WARNING oslo_config.cfg [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 20:25:23 compute-0 nova_compute[247949]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 20:25:23 compute-0 nova_compute[247949]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 20:25:23 compute-0 nova_compute[247949]: and ``live_migration_inbound_addr`` respectively.
Nov 25 20:25:23 compute-0 nova_compute[247949]: ).  Its value may be silently ignored in the future.
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.335 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
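The deprecation warning above pairs with the value logged for libvirt.live_migration_uri: the qemu+tls:// scheme embedded in qemu+tls://%s/system is exactly what the replacement options express directly, and both replacements are still unset in this dump (libvirt.live_migration_scheme and libvirt.live_migration_inbound_addr are None above), which is why the deprecated URI remains in effect. A minimal sketch of the non-deprecated equivalent in the compute host's nova.conf, assuming the stock [libvirt] section; the inbound address is a placeholder for illustration, not a value taken from this log:

    [libvirt]
    # Replaces the qemu+tls:// scheme that live_migration_uri carried
    live_migration_scheme = tls
    # Replaces the %s target in the URI; placeholder value -- set this to
    # the migration-network address of the destination compute host
    live_migration_inbound_addr = compute-0.migration.example.local

With these set, live_migration_uri can be dropped from the config entirely, silencing the oslo_config warning at startup.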
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.335 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.335 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.335 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.336 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.336 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.336 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.336 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.336 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.336 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.337 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.337 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.337 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.337 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.337 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.337 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.338 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.338 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.338 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rbd_secret_uuid        = 712dd110-763a-5547-8ef7-acda1414fdce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.338 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.339 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.339 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.339 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.339 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.339 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.339 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.340 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.340 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.340 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.340 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.340 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.341 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.341 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.341 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.341 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.341 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.341 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.341 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.342 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.342 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.342 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.342 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.342 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.342 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.342 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.343 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.343 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.343 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.343 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.343 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.343 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.344 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.344 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.344 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.344 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.344 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.344 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.345 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.345 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.345 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.345 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.345 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.345 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.346 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.346 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.346 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.346 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.346 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.346 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.346 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.347 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.347 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.347 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.347 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.347 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.347 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.347 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.348 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.348 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.348 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.348 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.348 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.348 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.348 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.349 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.349 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.349 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.349 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.349 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.349 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.350 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.350 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.350 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.350 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.350 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.350 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.350 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.351 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.351 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.351 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.351 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.351 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.351 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.352 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.352 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.352 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.352 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.352 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.352 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.353 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.353 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.353 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.353 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.353 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.353 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.354 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.354 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.354 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.354 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.354 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.354 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.355 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.355 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.355 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.355 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.355 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.355 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.356 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.356 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.356 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.356 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.356 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.356 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.357 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.357 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.357 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.357 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.357 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.358 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.358 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.358 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.358 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.358 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.358 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.359 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.359 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.359 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.359 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.359 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.359 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.360 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.360 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.360 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.360 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.360 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.360 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.361 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.361 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.361 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.361 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.361 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.362 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.362 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.362 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.362 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.362 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.362 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.363 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.363 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.363 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.363 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.363 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.363 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.363 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.364 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.364 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.364 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.364 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.364 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.364 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.365 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.365 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.365 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.365 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.365 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.365 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.366 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.366 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.366 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.366 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.366 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.366 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.366 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.367 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.367 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.367 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.367 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.367 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.368 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.368 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.368 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.368 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.368 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.368 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.368 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.369 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.369 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.369 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.369 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.369 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.369 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.370 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.370 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.370 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.370 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.370 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.370 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.371 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.371 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.371 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.371 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.371 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.371 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.372 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.372 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.372 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.372 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.372 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.372 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.373 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.373 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.373 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.373 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.373 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.374 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.374 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.374 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.374 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.374 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.374 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.375 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.375 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.375 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.375 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.375 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.376 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.376 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.376 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.376 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.376 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.377 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.377 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.377 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.377 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.377 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.377 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.378 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.378 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.378 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.378 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.378 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.378 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.378 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.378 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.379 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.379 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.379 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.379 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.379 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.379 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.380 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.380 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.380 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.380 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.380 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.380 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.381 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.381 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.381 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.381 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.381 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.381 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.382 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.382 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.382 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.382 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.382 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.382 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.383 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.383 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.383 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.383 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.383 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.383 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.384 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.384 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.384 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.384 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.384 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.384 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.384 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.385 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.385 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.385 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.385 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.385 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.385 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.386 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.386 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.386 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.386 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.386 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.386 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.387 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.387 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 systemd[1]: Stopping nova_compute container...
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.387 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.387 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.387 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.387 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.388 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.388 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.388 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.388 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.388 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.388 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.389 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.389 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.389 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.389 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.389 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.389 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.390 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.390 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.390 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.390 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.390 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.390 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.390 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.391 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.391 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.391 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.391 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.391 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.391 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.392 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.392 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.392 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.392 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.392 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.392 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.392 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.393 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.393 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.393 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.393 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.393 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.393 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.393 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.394 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.394 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.394 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.394 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.394 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.394 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.394 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.395 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.395 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.395 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.395 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.395 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.395 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.395 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.396 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.396 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.396 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.396 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.396 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.396 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.397 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.397 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.397 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.397 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.397 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.397 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.397 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.398 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.398 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.398 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.398 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.398 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.398 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.398 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.399 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.399 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.399 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.399 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.399 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.399 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.399 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.400 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.400 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.400 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.400 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.400 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.400 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.400 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.401 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.401 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.401 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.401 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.401 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.401 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.402 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.402 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.402 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.402 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.402 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.402 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.402 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.403 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.403 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.403 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.403 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.403 247953 DEBUG oslo_service.service [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
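The long run of oslo_service.service lines above is oslo.config dumping every registered option at DEBUG when the service starts. A minimal sketch of how such a dump is produced, assuming a hand-registered subset of the oslo_limit group shown above (options declared with secret=True, like the password, are masked as ****):

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.StrOpt('username'),
         cfg.StrOpt('password', secret=True)],   # logged as ****
        group='oslo_limit')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([])                                  # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # one line per option, as above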
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.404 247953 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.456 247953 DEBUG nova.virt.libvirt.host [None req-0e199a1e-6dd4-4475-9b34-c7f03ff9cc29 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.457 247953 DEBUG nova.virt.libvirt.host [None req-0e199a1e-6dd4-4475-9b34-c7f03ff9cc29 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.457 247953 DEBUG nova.virt.libvirt.host [None req-0e199a1e-6dd4-4475-9b34-c7f03ff9cc29 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.457 247953 DEBUG nova.virt.libvirt.host [None req-0e199a1e-6dd4-4475-9b34-c7f03ff9cc29 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
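The "Connecting to libvirt: qemu:///system" step is an ordinary libvirt connection; nova's host.py wraps it with the event and dispatch threads started just before. The bare underlying call, as a sketch using the python3-libvirt bindings:

    import libvirt

    conn = libvirt.open('qemu:///system')   # same URI as the log line above
    print(conn.getHostname())
    conn.close()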
Nov 25 20:25:23 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.486 247953 DEBUG oslo_concurrency.lockutils [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.486 247953 DEBUG oslo_concurrency.lockutils [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 20:25:23 compute-0 nova_compute[247949]: 2025-11-25 20:25:23.487 247953 DEBUG oslo_concurrency.lockutils [None req-236ffcc8-afb7-48be-bedb-dd351890f533 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 20:25:23 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 25 20:25:24 compute-0 systemd[1]: libpod-37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7.scope: Deactivated successfully.
Nov 25 20:25:24 compute-0 systemd[1]: libpod-37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7.scope: Consumed 3.349s CPU time.
Nov 25 20:25:24 compute-0 podman[248745]: 2025-11-25 20:25:24.05122717 +0000 UTC m=+0.649744849 container died 37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251118)
Nov 25 20:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7-userdata-shm.mount: Deactivated successfully.
Nov 25 20:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93-merged.mount: Deactivated successfully.
Nov 25 20:25:24 compute-0 podman[248803]: 2025-11-25 20:25:24.16963341 +0000 UTC m=+0.087668618 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 20:25:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v614: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:26 compute-0 ceph-mon[75144]: pgmap v613: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:25:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:25:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:25:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:25:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:25:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:25:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v615: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:28 compute-0 ceph-mon[75144]: pgmap v614: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:28 compute-0 ceph-mon[75144]: pgmap v615: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:28 compute-0 podman[248745]: 2025-11-25 20:25:28.513140753 +0000 UTC m=+5.111658382 container cleanup 37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:25:28 compute-0 podman[248745]: nova_compute
Nov 25 20:25:28 compute-0 podman[248838]: nova_compute
Nov 25 20:25:28 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 20:25:28 compute-0 systemd[1]: Stopped nova_compute container.
Nov 25 20:25:28 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 20:25:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9fe2d71d96f7b4618b39f705cc104d6cf62e4eb2bcbb8b995050928a538c93/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:28 compute-0 podman[248851]: 2025-11-25 20:25:28.775697263 +0000 UTC m=+0.117530588 container init 37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:25:28 compute-0 podman[248851]: 2025-11-25 20:25:28.791776159 +0000 UTC m=+0.133609514 container start 37ce1cf74fa38f33d73f5a06f8961d730dfa919fcc1547aca21756158e9077f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:25:28 compute-0 podman[248851]: nova_compute
Nov 25 20:25:28 compute-0 nova_compute[248866]: + sudo -E kolla_set_configs
Nov 25 20:25:28 compute-0 systemd[1]: Started nova_compute container.
Nov 25 20:25:28 compute-0 sudo[248739]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Validating config file
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying service configuration files
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /etc/ceph
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Creating directory /etc/ceph
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Writing out command to execute
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 20:25:28 compute-0 nova_compute[248866]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 20:25:28 compute-0 nova_compute[248866]: ++ cat /run_command
Nov 25 20:25:28 compute-0 nova_compute[248866]: + CMD=nova-compute
Nov 25 20:25:28 compute-0 nova_compute[248866]: + ARGS=
Nov 25 20:25:28 compute-0 nova_compute[248866]: + sudo kolla_copy_cacerts
Nov 25 20:25:28 compute-0 nova_compute[248866]: + [[ ! -n '' ]]
Nov 25 20:25:28 compute-0 nova_compute[248866]: + . kolla_extend_start
Nov 25 20:25:28 compute-0 nova_compute[248866]: Running command: 'nova-compute'
Nov 25 20:25:28 compute-0 nova_compute[248866]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 20:25:28 compute-0 nova_compute[248866]: + umask 0022
Nov 25 20:25:28 compute-0 nova_compute[248866]: + exec nova-compute
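The kolla_set_configs output above is driven by /var/lib/kolla/config_files/config.json, which maps each source file to a destination plus ownership and permissions; kolla_start then reads /run_command and execs it (here, nova-compute). A hedged sketch of the copy loop (assumed structure, not the actual kolla source):

    import json
    import os
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        config = json.load(f)

    for item in config.get('config_files', []):
        if os.path.exists(item['dest']):
            os.remove(item['dest'])                    # "Deleting ..." lines
        shutil.copy(item['source'], item['dest'])      # "Copying ... to ..."
        perm = int(item.get('perm', '0600'), 8)
        os.chmod(item['dest'], perm)                   # "Setting permission ..."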
Nov 25 20:25:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v616: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:29 compute-0 sudo[249027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjquqlpjlozalewzrturagmakgkmovfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764102329.1180089-1566-91012882498234/AnsiballZ_podman_container.py'
Nov 25 20:25:29 compute-0 sudo[249027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:25:29 compute-0 ceph-mon[75144]: pgmap v616: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:29 compute-0 python3.9[249029]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 20:25:29 compute-0 systemd[1]: Started libpod-conmon-9665e4aaf06ffd7441575ab019335b61d55be9231c95a603db7970479f9aa6ce.scope.
Nov 25 20:25:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc0166b4f47d12cc7136ee67cbcfb849d40995487a466ecdb4acc612368cc9/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc0166b4f47d12cc7136ee67cbcfb849d40995487a466ecdb4acc612368cc9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc0166b4f47d12cc7136ee67cbcfb849d40995487a466ecdb4acc612368cc9/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:29 compute-0 podman[249055]: 2025-11-25 20:25:29.957036806 +0000 UTC m=+0.132416482 container init 9665e4aaf06ffd7441575ab019335b61d55be9231c95a603db7970479f9aa6ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0)
Nov 25 20:25:29 compute-0 podman[249055]: 2025-11-25 20:25:29.969881716 +0000 UTC m=+0.145261372 container start 9665e4aaf06ffd7441575ab019335b61d55be9231c95a603db7970479f9aa6ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Nov 25 20:25:29 compute-0 python3.9[249029]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 20:25:30 compute-0 nova_compute_init[249076]: INFO:nova_statedir:Nova statedir ownership complete
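nova_compute_init walks /var/lib/nova and re-owns anything not already owned by the nova uid/gid (42436), skipping the paths listed in NOVA_STATEDIR_OWNERSHIP_SKIP. A hedged sketch of that pass (assumed logic, not the shipped nova_statedir_ownership.py; the SELinux relabel step seen above is omitted):

    import os

    TARGET_UID = TARGET_GID = 42436          # nova uid/gid from the log
    SKIP = {'/var/lib/nova/compute_id'}      # NOVA_STATEDIR_OWNERSHIP_SKIP

    for root, dirs, files in os.walk('/var/lib/nova'):
        for path in [root] + [os.path.join(root, f) for f in files]:
            if path in SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                os.lchown(path, TARGET_UID, TARGET_GID)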
Nov 25 20:25:30 compute-0 systemd[1]: libpod-9665e4aaf06ffd7441575ab019335b61d55be9231c95a603db7970479f9aa6ce.scope: Deactivated successfully.
Nov 25 20:25:30 compute-0 podman[249090]: 2025-11-25 20:25:30.08052789 +0000 UTC m=+0.025383112 container died 9665e4aaf06ffd7441575ab019335b61d55be9231c95a603db7970479f9aa6ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9665e4aaf06ffd7441575ab019335b61d55be9231c95a603db7970479f9aa6ce-userdata-shm.mount: Deactivated successfully.
Nov 25 20:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-44bc0166b4f47d12cc7136ee67cbcfb849d40995487a466ecdb4acc612368cc9-merged.mount: Deactivated successfully.
Nov 25 20:25:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:30 compute-0 podman[249090]: 2025-11-25 20:25:30.299290674 +0000 UTC m=+0.244145906 container cleanup 9665e4aaf06ffd7441575ab019335b61d55be9231c95a603db7970479f9aa6ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible)
Nov 25 20:25:30 compute-0 systemd[1]: libpod-conmon-9665e4aaf06ffd7441575ab019335b61d55be9231c95a603db7970479f9aa6ce.scope: Deactivated successfully.
Nov 25 20:25:30 compute-0 nova_compute[248866]: 2025-11-25 20:25:30.899 248870 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 20:25:30 compute-0 nova_compute[248866]: 2025-11-25 20:25:30.900 248870 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 20:25:30 compute-0 nova_compute[248866]: 2025-11-25 20:25:30.900 248870 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 20:25:30 compute-0 nova_compute[248866]: 2025-11-25 20:25:30.900 248870 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
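The three "Loaded VIF plugin class" lines come from os_vif discovering its plugins (linux_bridge, noop, ovs) through stevedore entry points. The call that triggers them:

    import os_vif

    os_vif.initialize()   # loads and initializes every registered VIF plugin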
Nov 25 20:25:31 compute-0 sudo[249027]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.019 248870 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.031 248870 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.031 248870 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
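The grep probe above checks whether the iscsiadm wrapper mentions node.session.scan; exit status 1 simply means the string is absent, so it is a valid answer rather than a failure worth retrying. An equivalent call through the same oslo library, with exit code 1 accepted (assumed handling, mirroring the log):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'grep', '-F', 'node.session.scan', '/sbin/iscsiadm',
        check_exit_code=[0, 1])   # treat both 0 and 1 as success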
Nov 25 20:25:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v617: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.485 248870 INFO nova.virt.driver [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 20:25:31 compute-0 sshd-session[217998]: Connection closed by 192.168.122.30 port 55912
Nov 25 20:25:31 compute-0 sshd-session[217995]: pam_unix(sshd:session): session closed for user zuul
Nov 25 20:25:31 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 25 20:25:31 compute-0 systemd[1]: session-50.scope: Consumed 2min 40.397s CPU time.
Nov 25 20:25:31 compute-0 systemd-logind[789]: Session 50 logged out. Waiting for processes to exit.
Nov 25 20:25:31 compute-0 systemd-logind[789]: Removed session 50.
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.655 248870 INFO nova.compute.provider_config [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 20:25:31 compute-0 podman[249146]: 2025-11-25 20:25:31.672129959 +0000 UTC m=+0.118747840 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
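The periodic health_status=healthy entries are podman running each container's configured healthcheck (the '/openstack/healthcheck' script mounted read-only above) on a timer. The same check can be triggered by hand, sketched here via subprocess (assumes podman on PATH):

    import subprocess

    subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'],
                   check=False)   # exit 0 == healthy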
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.674 248870 DEBUG oslo_concurrency.lockutils [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.675 248870 DEBUG oslo_concurrency.lockutils [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.675 248870 DEBUG oslo_concurrency.lockutils [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.675 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.675 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.676 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.676 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.676 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.676 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.676 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.676 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.676 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.677 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.677 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.677 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.677 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.677 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.677 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.677 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.678 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.678 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.678 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.678 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.678 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.678 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.679 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.679 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.679 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.679 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.679 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.679 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.680 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.680 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.680 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.680 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.680 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.680 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.680 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.681 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.681 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.681 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.681 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.681 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.681 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.682 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.682 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.682 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.682 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.682 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.682 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.682 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.683 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.683 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.683 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.683 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.683 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.683 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.684 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.684 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.684 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.684 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.684 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.684 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.684 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.685 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.685 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.685 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.685 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.685 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.685 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.685 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.685 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.686 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.686 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.686 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.686 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.686 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.687 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.687 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.687 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.687 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.687 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.687 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.688 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.688 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.688 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.688 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.688 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.688 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.688 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.689 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.689 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.689 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.689 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.689 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.689 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.689 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.690 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.690 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.690 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.690 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.690 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.690 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.691 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.691 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.691 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.691 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.691 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.691 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.691 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.692 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.692 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.692 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.692 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.692 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.692 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.692 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.693 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.693 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.693 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.693 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.693 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.693 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.693 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.694 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.694 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.694 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.694 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.694 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.694 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.695 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.695 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.695 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.695 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.695 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.695 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.695 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.696 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.696 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.696 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.696 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.696 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.696 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.696 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.696 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.697 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.697 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.697 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.697 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.697 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.697 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.698 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.698 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.698 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.698 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.698 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.698 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.699 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.699 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.699 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.699 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.699 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.699 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.699 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.700 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.700 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.700 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.700 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.700 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.700 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.700 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.701 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.701 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.701 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.701 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.701 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.701 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.701 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.702 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.702 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.702 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.702 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.702 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.702 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.703 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.703 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.703 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.703 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.703 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.703 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.704 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.704 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.704 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.704 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.704 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.704 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.704 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.705 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.705 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.705 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.705 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.705 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.706 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.706 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.706 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.706 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.706 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.706 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.707 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.707 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.707 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.707 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.707 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.707 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.708 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.708 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.708 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.708 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.708 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.709 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.709 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.709 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.709 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.709 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.709 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.710 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.710 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.710 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.710 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.710 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.711 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.711 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.711 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.711 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.711 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.712 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.712 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.712 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.712 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.712 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.712 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.713 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.713 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.713 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.713 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.713 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.713 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.713 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.714 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.714 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.714 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.714 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.714 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.714 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.715 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.715 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.715 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.715 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.715 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.715 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.715 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.716 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.716 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.716 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.716 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.716 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.716 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.716 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.717 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.717 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.717 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.717 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.717 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.717 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.717 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.718 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.718 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.718 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.718 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.718 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.718 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.719 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.719 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.719 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.719 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.719 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.719 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.720 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.720 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.720 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.720 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.720 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.720 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.720 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.721 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.721 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.721 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.721 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.721 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.721 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.722 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.722 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.722 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.722 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.722 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.722 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.723 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.723 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.723 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.723 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.723 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.723 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.724 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.724 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.724 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.724 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.724 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.724 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.724 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.724 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.725 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.725 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.725 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.725 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.725 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.725 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.725 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.726 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.726 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.726 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.726 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.726 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.726 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.727 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.727 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.727 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.727 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.727 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.727 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.728 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.728 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.728 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.728 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.728 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.728 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.728 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.729 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.729 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.729 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.729 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.729 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.729 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.729 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.730 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.730 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.730 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.730 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.730 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.730 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.731 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.731 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.731 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.731 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.731 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.731 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.732 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.732 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.732 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.732 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.732 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.732 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.732 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.732 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.733 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.733 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.733 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.733 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.733 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.733 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.733 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.734 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.734 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.734 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.734 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.734 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.734 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.735 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.735 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.735 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.735 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.735 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.735 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.735 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.736 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.736 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.736 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.736 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.736 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.736 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.737 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.737 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.737 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.737 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.737 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.737 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.737 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.738 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.738 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.738 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.738 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.738 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.739 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.739 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.739 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.739 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.739 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.740 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.740 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.740 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.740 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.740 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.740 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.741 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.741 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.741 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.741 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.741 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.741 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.741 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.742 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.742 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.742 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.742 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.742 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.742 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.743 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.743 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.743 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.743 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.743 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.743 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.744 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.744 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.744 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.744 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.744 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.744 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.745 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.745 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.745 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.745 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.746 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.746 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.746 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.746 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.746 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.747 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.747 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.747 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.747 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.747 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.747 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.748 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.748 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.748 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.748 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.749 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.749 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.749 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.749 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.749 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.750 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.750 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.750 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.750 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.750 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.750 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.750 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.751 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.751 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.751 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.751 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.751 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.751 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.752 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.752 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.752 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.752 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.752 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.752 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.753 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.753 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.753 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.753 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.753 248870 WARNING oslo_config.cfg [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 20:25:31 compute-0 nova_compute[248866]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 20:25:31 compute-0 nova_compute[248866]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 20:25:31 compute-0 nova_compute[248866]: and ``live_migration_inbound_addr`` respectively.
Nov 25 20:25:31 compute-0 nova_compute[248866]: ).  Its value may be silently ignored in the future.
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.754 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.754 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.754 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.754 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.754 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.755 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.755 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.755 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.755 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.755 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.755 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.756 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.756 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.756 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.756 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.756 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.756 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.757 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.757 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rbd_secret_uuid        = 712dd110-763a-5547-8ef7-acda1414fdce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.757 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.757 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.757 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.757 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.758 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.758 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.758 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.758 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.758 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.758 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.758 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.759 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.759 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.759 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.759 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.759 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.759 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.760 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.760 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.760 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.760 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.760 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.760 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.760 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.761 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.761 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.761 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.761 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.761 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.761 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.761 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.762 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.762 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.762 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.762 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.762 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.762 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.763 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.763 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.763 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.763 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.763 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.763 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.763 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.764 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.764 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.764 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.764 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.764 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.764 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.764 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.765 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.765 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.765 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.765 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.765 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.766 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.766 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.766 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.766 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.766 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.766 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.767 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.767 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.767 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.767 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.768 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.768 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.768 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.768 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.768 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.769 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.769 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.769 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.769 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.769 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.770 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.770 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.770 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.770 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.770 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.771 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.771 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.771 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.771 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.771 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.771 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.772 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.772 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.772 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.772 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.772 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.773 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.773 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.773 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.773 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.773 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.774 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.774 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.774 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.774 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.774 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.774 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.775 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.775 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.775 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.775 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.775 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.776 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.776 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.776 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.776 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.776 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.777 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.777 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.777 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.777 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.777 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.777 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.778 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.778 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.778 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.779 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.779 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.779 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.779 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.779 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.780 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.780 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.780 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.780 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.780 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.781 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.781 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.781 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.781 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.781 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.782 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.782 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.782 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.782 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.782 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.783 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.783 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.783 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.783 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.784 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.784 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.784 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.784 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.784 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.785 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.785 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.785 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.785 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.785 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.786 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.786 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.786 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.786 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.786 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.787 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.787 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.787 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.787 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.787 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.788 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.788 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.788 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.788 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.788 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.789 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.789 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.789 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.789 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.790 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.790 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.790 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.790 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.791 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.791 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.791 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.791 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.791 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.792 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.792 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.792 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.792 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.792 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.793 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.793 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.793 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.793 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.793 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.794 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.794 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.794 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.794 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.794 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.795 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.795 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.795 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.795 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.795 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.796 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.796 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.796 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.796 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.796 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.796 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.797 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.797 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.797 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.797 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.797 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.797 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.797 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.798 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.798 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.798 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.798 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.798 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.798 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.798 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.799 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.799 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.799 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.799 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.799 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.799 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.800 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.800 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.800 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.800 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.800 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.800 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.801 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.801 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.801 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.801 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.801 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.801 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.801 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.802 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.802 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.802 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.802 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.802 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.802 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.802 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.802 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.803 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.803 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.803 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.803 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.803 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.803 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.804 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.804 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.804 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.804 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.804 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.804 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.804 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.805 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.805 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.805 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.805 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.805 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.805 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.805 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.806 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.806 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.806 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.806 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.806 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.806 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.807 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.807 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.807 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.807 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.807 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.807 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.807 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.808 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.808 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.808 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.808 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.808 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.808 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.808 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.809 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.809 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.809 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.809 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.809 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.809 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.809 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.810 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.810 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.810 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.810 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.810 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.810 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.810 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.810 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.811 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.811 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.811 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.811 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.811 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.811 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.812 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.812 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.812 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.812 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.812 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.812 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.812 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.813 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.813 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.813 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.813 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.813 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.813 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.813 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.814 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.814 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.814 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.814 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.814 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.814 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.814 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.815 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.815 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.815 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.815 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.815 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.815 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.815 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.816 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.816 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.816 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.816 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.816 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.816 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.816 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.817 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.817 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.817 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.817 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.817 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.817 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.817 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.817 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.818 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.818 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.818 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.818 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.818 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.818 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.819 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.819 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.819 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.819 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.819 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.819 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.819 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.820 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.820 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.820 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.820 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.820 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.820 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.820 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.821 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.821 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.821 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.821 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.821 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.822 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.822 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.822 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.822 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.822 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.822 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.823 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.823 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.823 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.823 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.823 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.823 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.823 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.824 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.824 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.824 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.824 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.824 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.824 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.824 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.825 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.825 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.825 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.825 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.825 248870 DEBUG oslo_service.service [None req-2e63fdc5-4b15-4515-85d8-5545d4905877 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
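[Editor's note] The block ending above is oslo.config's standard startup dump: one "option = value" line per registered option, emitted by ConfigOpts.log_opt_values() (hence the cfg.py:2609 frame on every line) and closed by the row of asterisks (cfg.py:2613). Options registered with secret=True, such as transport_url and password, are why some values print as ****. A minimal standalone sketch of the same mechanism, assuming oslo.config is installed; the option names are borrowed from the [oslo_messaging_rabbit] group above purely for illustration:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.BoolOpt('amqp_durable_queues', default=False),
            cfg.IntOpt('heartbeat_timeout_threshold', default=60),
            cfg.StrOpt('password', secret=True),  # masked as **** in the dump
        ],
        group='oslo_messaging_rabbit')
    CONF([])  # parse an (empty) command line so option values are readable
    # Emits the same framed "option = value" listing seen in the log above.
    CONF.log_opt_values(LOG, logging.DEBUG)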
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.826 248870 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.841 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.842 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.842 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.842 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.855 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1f34be72b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.858 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1f34be72b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.858 248870 INFO nova.virt.libvirt.driver [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Connection event '1' reason 'None'
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.880 248870 WARNING nova.virt.libvirt.driver [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 20:25:31 compute-0 nova_compute[248866]: 2025-11-25 20:25:31.880 248870 DEBUG nova.virt.libvirt.volume.mount [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
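[Editor's note] The ComputeHostNotFound warning above is expected on a first start: the service record for compute-0.ctlplane.example.com does not exist yet, so the status update is skipped until the node registers itself later in startup. The surrounding host.py lines show the driver starting libvirt's event loop, opening qemu:///system, and registering for lifecycle and connection events. A minimal standalone sketch of that registration flow, assuming the libvirt-python bindings; the callback body is illustrative only, not nova's actual handler:

    import threading
    import libvirt

    # Native event loop, as in Host._init_events ("Starting native event thread").
    libvirt.virEventRegisterDefaultImpl()

    def _event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=_event_loop, daemon=True).start()

    # "Connecting to libvirt: qemu:///system"
    conn = libvirt.open('qemu:///system')

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Nova translates these into internal lifecycle events; here we just print.
        print(dom.name(), event, detail)

    # "Registering for lifecycle events": the same libvirt API nova uses underneath.
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)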
Nov 25 20:25:32 compute-0 ceph-mon[75144]: pgmap v617: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:32 compute-0 nova_compute[248866]: 2025-11-25 20:25:32.917 248870 INFO nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 20:25:32 compute-0 nova_compute[248866]: 
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <host>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <uuid>ee007d13-5173-4e64-8d3e-c554c682b054</uuid>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <cpu>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <arch>x86_64</arch>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model>EPYC-Rome-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <vendor>AMD</vendor>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <microcode version='16777317'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <signature family='23' model='49' stepping='0'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='x2apic'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='tsc-deadline'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='osxsave'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='hypervisor'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='tsc_adjust'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='spec-ctrl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='stibp'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='arch-capabilities'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='ssbd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='cmp_legacy'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='topoext'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='virt-ssbd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='lbrv'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='tsc-scale'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='vmcb-clean'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='pause-filter'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='pfthreshold'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='svme-addr-chk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='rdctl-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='skip-l1dfl-vmentry'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='mds-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature name='pschange-mc-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <pages unit='KiB' size='4'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <pages unit='KiB' size='2048'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <pages unit='KiB' size='1048576'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </cpu>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <power_management>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <suspend_mem/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </power_management>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <iommu support='no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <migration_features>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <live/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <uri_transports>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <uri_transport>tcp</uri_transport>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <uri_transport>rdma</uri_transport>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </uri_transports>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </migration_features>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <topology>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <cells num='1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <cell id='0'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:           <memory unit='KiB'>7864324</memory>
Nov 25 20:25:32 compute-0 nova_compute[248866]:           <pages unit='KiB' size='4'>1966081</pages>
Nov 25 20:25:32 compute-0 nova_compute[248866]:           <pages unit='KiB' size='2048'>0</pages>
Nov 25 20:25:32 compute-0 nova_compute[248866]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 25 20:25:32 compute-0 nova_compute[248866]:           <distances>
Nov 25 20:25:32 compute-0 nova_compute[248866]:             <sibling id='0' value='10'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:           </distances>
Nov 25 20:25:32 compute-0 nova_compute[248866]:           <cpus num='8'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:           </cpus>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         </cell>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </cells>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </topology>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <cache>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </cache>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <secmodel>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model>selinux</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <doi>0</doi>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </secmodel>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <secmodel>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model>dac</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <doi>0</doi>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </secmodel>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   </host>
Nov 25 20:25:32 compute-0 nova_compute[248866]: 
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <guest>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <os_type>hvm</os_type>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <arch name='i686'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <wordsize>32</wordsize>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <domain type='qemu'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <domain type='kvm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </arch>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <features>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <pae/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <nonpae/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <acpi default='on' toggle='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <apic default='on' toggle='no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <cpuselection/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <deviceboot/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <disksnapshot default='on' toggle='no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <externalSnapshot/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </features>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   </guest>
Nov 25 20:25:32 compute-0 nova_compute[248866]: 
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <guest>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <os_type>hvm</os_type>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <arch name='x86_64'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <wordsize>64</wordsize>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <domain type='qemu'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <domain type='kvm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </arch>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <features>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <acpi default='on' toggle='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <apic default='on' toggle='no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <cpuselection/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <deviceboot/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <disksnapshot default='on' toggle='no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <externalSnapshot/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </features>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   </guest>
Nov 25 20:25:32 compute-0 nova_compute[248866]: 
Nov 25 20:25:32 compute-0 nova_compute[248866]: </capabilities>
Nov 25 20:25:32 compute-0 nova_compute[248866]: 
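[Editor's note] The <capabilities> document closed just above is what nova interrogates for the host CPU model, NUMA topology, and supported page sizes. A minimal sketch of extracting those fields with the standard library, assuming the XML has been saved to a hypothetical local file capabilities.xml; the same document is available live via virsh capabilities or conn.getCapabilities():

    import xml.etree.ElementTree as ET

    root = ET.parse('capabilities.xml').getroot()  # hypothetical saved copy of the dump

    arch = root.findtext('./host/cpu/arch')        # 'x86_64'
    model = root.findtext('./host/cpu/model')      # 'EPYC-Rome-v4'
    page_sizes = [p.get('size') for p in root.findall('./host/cpu/pages')]
    # ['4', '2048', '1048576'] in KiB: base 4K pages, 2M and 1G huge pages

    for cell in root.findall('./host/topology/cells/cell'):
        mem_kib = cell.findtext('memory')          # '7864324'
        ncpus = cell.find('cpus').get('num')       # '8'
        print(arch, model, page_sizes, mem_kib, ncpus)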
Nov 25 20:25:32 compute-0 nova_compute[248866]: 2025-11-25 20:25:32.922 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 20:25:32 compute-0 nova_compute[248866]: 2025-11-25 20:25:32.955 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 20:25:32 compute-0 nova_compute[248866]: <domainCapabilities>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <domain>kvm</domain>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <arch>i686</arch>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <vcpu max='4096'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <iothreads supported='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <os supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <enum name='firmware'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <loader supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>rom</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>pflash</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='readonly'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>yes</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>no</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='secure'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>no</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </loader>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   </os>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <cpu>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <mode name='host-passthrough' supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='hostPassthroughMigratable'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>on</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>off</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <mode name='maximum' supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='maximumMigratable'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>on</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>off</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <mode name='host-model' supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <vendor>AMD</vendor>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='x2apic'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='hypervisor'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='stibp'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='ssbd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='overflow-recov'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='succor'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='ibrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='lbrv'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc-scale'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='flushbyasid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='pause-filter'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='pfthreshold'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='disable' name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <mode name='custom' supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-noTSX'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cooperlake'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cooperlake-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cooperlake-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Denverton'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Denverton-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Denverton-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Denverton-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Dhyana-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-Genoa'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='auto-ibrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='auto-ibrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='EPYC-v4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx10'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx10-128'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx10-256'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx10-512'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Haswell'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Haswell-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Haswell-noTSX'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Haswell-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Haswell-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Haswell-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Haswell-v4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v5'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v6'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v7'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='IvyBridge'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='KnightsMill'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-4fmaps'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-4vnniw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512er'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512pf'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='KnightsMill-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-4fmaps'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-4vnniw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512er'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512pf'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Opteron_G4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Opteron_G4-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Opteron_G5'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='tbm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Opteron_G5-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='tbm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='SierraForest'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-ne-convert'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni-int8'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='cmpccxadd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='SierraForest-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-ifma'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-ne-convert'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx-vnni-int8'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='cmpccxadd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v5'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Snowridge'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='athlon'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='athlon-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='core2duo'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='core2duo-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='coreduo'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='coreduo-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='n270'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='n270-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='phenom'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='phenom-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   </cpu>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <memoryBacking supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <enum name='sourceType'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <value>file</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <value>anonymous</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <value>memfd</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   </memoryBacking>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <devices>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <disk supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='diskDevice'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>disk</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>cdrom</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>floppy</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>lun</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='bus'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>fdc</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>scsi</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>sata</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>virtio-transitional</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>virtio-non-transitional</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </disk>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <graphics supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>vnc</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>egl-headless</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>dbus</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </graphics>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <video supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='modelType'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>vga</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>cirrus</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>none</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>bochs</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>ramfb</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </video>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <hostdev supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='mode'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>subsystem</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='startupPolicy'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>default</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>mandatory</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>requisite</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>optional</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='subsysType'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>pci</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>scsi</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='capsType'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='pciBackend'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </hostdev>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <rng supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>virtio-transitional</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>virtio-non-transitional</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>random</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>egd</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>builtin</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </rng>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <filesystem supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='driverType'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>path</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>handle</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>virtiofs</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </filesystem>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <tpm supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>tpm-tis</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>tpm-crb</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>emulator</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>external</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='backendVersion'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>2.0</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </tpm>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <redirdev supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='bus'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </redirdev>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <channel supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>pty</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>unix</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </channel>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <crypto supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='model'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>qemu</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>builtin</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </crypto>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <interface supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='backendType'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>default</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>passt</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </interface>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <panic supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>isa</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>hyperv</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </panic>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <console supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>null</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>vc</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>pty</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>dev</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>file</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>pipe</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>stdio</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>udp</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>tcp</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>unix</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>qemu-vdagent</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>dbus</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </console>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   </devices>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <features>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <gic supported='no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <vmcoreinfo supported='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <genid supported='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <backingStoreInput supported='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <backup supported='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <async-teardown supported='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <ps2 supported='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <sev supported='no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <sgx supported='no'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <hyperv supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='features'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>relaxed</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>vapic</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>spinlocks</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>vpindex</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>runtime</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>synic</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>stimer</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>reset</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>vendor_id</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>frequencies</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>reenlightenment</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>tlbflush</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>ipi</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>avic</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>emsr_bitmap</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>xmm_input</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <defaults>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <spinlocks>4095</spinlocks>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <stimer_direct>on</stimer_direct>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </defaults>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </hyperv>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <launchSecurity supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='sectype'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>tdx</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </launchSecurity>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   </features>
Nov 25 20:25:32 compute-0 nova_compute[248866]: </domainCapabilities>
Nov 25 20:25:32 compute-0 nova_compute[248866]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 20:25:32 compute-0 nova_compute[248866]: 2025-11-25 20:25:32.962 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 20:25:32 compute-0 nova_compute[248866]: <domainCapabilities>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <domain>kvm</domain>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <arch>i686</arch>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <vcpu max='240'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <iothreads supported='yes'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <os supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <enum name='firmware'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <loader supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>rom</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>pflash</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='readonly'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>yes</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>no</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='secure'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>no</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </loader>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   </os>
Nov 25 20:25:32 compute-0 nova_compute[248866]:   <cpu>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <mode name='host-passthrough' supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='hostPassthroughMigratable'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>on</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>off</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <mode name='maximum' supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <enum name='maximumMigratable'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>on</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <value>off</value>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <mode name='host-model' supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <vendor>AMD</vendor>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='x2apic'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='hypervisor'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='stibp'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='ssbd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='overflow-recov'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='succor'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='ibrs'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='lbrv'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc-scale'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='flushbyasid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='pause-filter'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='pfthreshold'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <feature policy='disable' name='xsaves'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:32 compute-0 nova_compute[248866]:     <mode name='custom' supported='yes'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-noTSX'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v2'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v3'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v4'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 20:25:32 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:32 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cooperlake'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cooperlake-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cooperlake-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Dhyana-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Genoa'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='auto-ibrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='auto-ibrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10-128'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10-256'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10-512'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v6'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v7'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='KnightsMill'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4fmaps'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4vnniw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512er'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512pf'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='KnightsMill-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4fmaps'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4vnniw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512er'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512pf'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G4-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tbm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G5-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tbm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SierraForest'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ne-convert'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cmpccxadd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SierraForest-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ne-convert'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cmpccxadd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='athlon'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='athlon-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='core2duo'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='core2duo-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='coreduo'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='coreduo-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='n270'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='n270-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='phenom'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='phenom-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </cpu>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <memoryBacking supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <enum name='sourceType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>file</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>anonymous</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>memfd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </memoryBacking>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <devices>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <disk supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='diskDevice'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>disk</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>cdrom</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>floppy</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>lun</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='bus'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>ide</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>fdc</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>scsi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>sata</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-non-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </disk>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <graphics supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vnc</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>egl-headless</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>dbus</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </graphics>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <video supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='modelType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vga</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>cirrus</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>none</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>bochs</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>ramfb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </video>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <hostdev supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='mode'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>subsystem</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='startupPolicy'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>default</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>mandatory</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>requisite</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>optional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='subsysType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pci</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>scsi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='capsType'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='pciBackend'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </hostdev>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <rng supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-non-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>random</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>egd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>builtin</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </rng>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <filesystem supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='driverType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>path</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>handle</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtiofs</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </filesystem>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <tpm supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tpm-tis</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tpm-crb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>emulator</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>external</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendVersion'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>2.0</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </tpm>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <redirdev supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='bus'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </redirdev>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <channel supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pty</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>unix</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </channel>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <crypto supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>qemu</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>builtin</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </crypto>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <interface supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>default</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>passt</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </interface>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <panic supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>isa</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>hyperv</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </panic>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <console supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>null</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vc</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pty</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>dev</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>file</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pipe</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>stdio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>udp</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tcp</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>unix</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>qemu-vdagent</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>dbus</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </console>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </devices>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <features>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <gic supported='no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <vmcoreinfo supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <genid supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <backingStoreInput supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <backup supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <async-teardown supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <ps2 supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <sev supported='no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <sgx supported='no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <hyperv supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='features'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>relaxed</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vapic</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>spinlocks</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vpindex</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>runtime</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>synic</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>stimer</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>reset</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vendor_id</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>frequencies</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>reenlightenment</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tlbflush</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>ipi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>avic</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>emsr_bitmap</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>xmm_input</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <defaults>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <spinlocks>4095</spinlocks>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <stimer_direct>on</stimer_direct>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </defaults>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </hyperv>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <launchSecurity supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='sectype'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tdx</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </launchSecurity>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </features>
Nov 25 20:25:33 compute-0 nova_compute[248866]: </domainCapabilities>
Nov 25 20:25:33 compute-0 nova_compute[248866]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
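The block above is the raw libvirt domainCapabilities XML that nova-compute fetches and logs while probing the hypervisor: each named CPU model carries usable='yes' or 'no', and for unusable models a <blockers> element lists the features the host is missing. A minimal sketch of fetching the same document and summarising it with libvirt-python and the standard-library XML parser; the 'qemu:///system' URI is an assumption, while the emulator path, arch, machine type, and virt type mirror values visible in this log:

import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')  # assumed local system URI
try:
    # Same query nova issues: emulator binary, arch, machine type, virt type.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm')
finally:
    conn.close()

root = ET.fromstring(caps_xml)
custom = root.find(".//cpu/mode[@name='custom']")

# <blockers model='X'> lists the host-missing features for named model X.
blocked = {
    b.get('model'): [f.get('name') for f in b.findall('feature')]
    for b in custom.findall('blockers')
}
for model in custom.findall('model'):
    if model.get('usable') == 'yes':
        print(f'{model.text}: usable')
    else:
        print(f"{model.text}: blocked by {', '.join(blocked.get(model.text, []))}")

Run against the dump above, this would report, for example, the Westmere variants as usable and Skylake-Server as blocked by the avx512* group, erms, hle, invpcid, pcid, pku, and rtm, matching the <blockers> entries logged.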
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:32.986 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
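As the line above shows, nova repeats this query once per configured machine type ({'q35', 'pc'} here). A sketch of that per-machine-type loop, reporting the host-model baseline each document advertises (EPYC-Rome in the q35 dump that follows); the URI and emulator path are the same assumptions as in the previous sketch:

import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')  # assumed local system URI
try:
    for machine in ('q35', 'pc'):
        xml_doc = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm', 'x86_64', machine, 'kvm')
        mode = ET.fromstring(xml_doc).find(".//cpu/mode[@name='host-model']")
        # <model fallback='...'>NAME</model> is the host-model baseline.
        print(machine, mode.findtext('model'))
finally:
    conn.close()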
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:32.991 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 20:25:33 compute-0 nova_compute[248866]: <domainCapabilities>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <domain>kvm</domain>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <arch>x86_64</arch>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <vcpu max='4096'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <iothreads supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <os supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <enum name='firmware'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>efi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <loader supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>rom</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pflash</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='readonly'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>yes</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>no</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='secure'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>yes</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>no</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </loader>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </os>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <cpu>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <mode name='host-passthrough' supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='hostPassthroughMigratable'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>on</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>off</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <mode name='maximum' supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='maximumMigratable'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>on</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>off</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <mode name='host-model' supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <vendor>AMD</vendor>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='x2apic'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='hypervisor'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='stibp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='ssbd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='overflow-recov'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='succor'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='ibrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='lbrv'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc-scale'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='flushbyasid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='pause-filter'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='pfthreshold'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='disable' name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <mode name='custom' supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cooperlake'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cooperlake-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cooperlake-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Dhyana-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Genoa'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='auto-ibrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='auto-ibrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10-128'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10-256'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10-512'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v6'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v7'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='KnightsMill'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4fmaps'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4vnniw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512er'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512pf'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='KnightsMill-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4fmaps'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4vnniw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512er'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512pf'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G4-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tbm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G5-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tbm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SierraForest'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ne-convert'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cmpccxadd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SierraForest-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ne-convert'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cmpccxadd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='athlon'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='athlon-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='core2duo'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='core2duo-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='coreduo'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='coreduo-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='n270'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='n270-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='phenom'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='phenom-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </cpu>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <memoryBacking supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <enum name='sourceType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>file</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>anonymous</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>memfd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </memoryBacking>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <devices>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <disk supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='diskDevice'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>disk</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>cdrom</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>floppy</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>lun</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='bus'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>fdc</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>scsi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>sata</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-non-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </disk>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <graphics supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vnc</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>egl-headless</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>dbus</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </graphics>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <video supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='modelType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vga</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>cirrus</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>none</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>bochs</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>ramfb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </video>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <hostdev supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='mode'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>subsystem</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='startupPolicy'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>default</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>mandatory</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>requisite</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>optional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='subsysType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pci</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>scsi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='capsType'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='pciBackend'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </hostdev>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <rng supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-non-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>random</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>egd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>builtin</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </rng>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <filesystem supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='driverType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>path</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>handle</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtiofs</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </filesystem>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <tpm supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tpm-tis</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tpm-crb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>emulator</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>external</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendVersion'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>2.0</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </tpm>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <redirdev supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='bus'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </redirdev>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <channel supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pty</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>unix</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </channel>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <crypto supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>qemu</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>builtin</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </crypto>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <interface supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>default</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>passt</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </interface>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <panic supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>isa</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>hyperv</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </panic>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <console supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>null</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vc</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pty</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>dev</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>file</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pipe</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>stdio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>udp</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tcp</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>unix</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>qemu-vdagent</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>dbus</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </console>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </devices>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <features>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <gic supported='no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <vmcoreinfo supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <genid supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <backingStoreInput supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <backup supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <async-teardown supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <ps2 supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <sev supported='no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <sgx supported='no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <hyperv supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='features'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>relaxed</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vapic</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>spinlocks</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vpindex</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>runtime</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>synic</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>stimer</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>reset</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vendor_id</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>frequencies</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>reenlightenment</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tlbflush</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>ipi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>avic</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>emsr_bitmap</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>xmm_input</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <defaults>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <spinlocks>4095</spinlocks>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <stimer_direct>on</stimer_direct>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </defaults>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </hyperv>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <launchSecurity supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='sectype'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tdx</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </launchSecurity>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </features>
Nov 25 20:25:33 compute-0 nova_compute[248866]: </domainCapabilities>
Nov 25 20:25:33 compute-0 nova_compute[248866]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.047 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 20:25:33 compute-0 nova_compute[248866]: <domainCapabilities>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <domain>kvm</domain>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <arch>x86_64</arch>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <vcpu max='240'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <iothreads supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <os supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <enum name='firmware'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <loader supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>rom</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pflash</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='readonly'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>yes</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>no</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='secure'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>no</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </loader>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </os>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <cpu>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <mode name='host-passthrough' supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='hostPassthroughMigratable'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>on</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>off</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <mode name='maximum' supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='maximumMigratable'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>on</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>off</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <mode name='host-model' supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <vendor>AMD</vendor>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='x2apic'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='hypervisor'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='stibp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='ssbd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='overflow-recov'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='succor'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='ibrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='lbrv'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='tsc-scale'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='flushbyasid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='pause-filter'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='pfthreshold'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <feature policy='disable' name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <mode name='custom' supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Broadwell-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cooperlake'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cooperlake-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Cooperlake-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Denverton-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Dhyana-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Genoa'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='auto-ibrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='auto-ibrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Milan-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amd-psfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='no-nested-data-bp'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='null-sel-clr-base'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='stibp-always-on'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-Rome-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='EPYC-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='GraniteRapids-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10-128'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10-256'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx10-512'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='prefetchiti'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Haswell-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v6'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Icelake-Server-v7'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='IvyBridge-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='KnightsMill'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4fmaps'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4vnniw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512er'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512pf'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='KnightsMill-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4fmaps'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-4vnniw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512er'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512pf'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G4-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tbm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Opteron_G5-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fma4'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tbm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xop'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SapphireRapids-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='amx-tile'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-bf16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-fp16'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512-vpopcntdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bitalg'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vbmi2'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrc'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fzrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='la57'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='taa-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='tsx-ldtrk'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xfd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SierraForest'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ne-convert'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cmpccxadd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='SierraForest-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ifma'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-ne-convert'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx-vnni-int8'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='bus-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cmpccxadd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fbsdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='fsrs'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ibrs-all'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mcdt-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pbrsb-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='psdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='sbdr-ssdp-no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='serialize'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vaes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='vpclmulqdq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Client-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='hle'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='rtm'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Skylake-Server-v5'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512bw'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512cd'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512dq'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512f'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='avx512vl'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='invpcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pcid'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='pku'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='mpx'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v2'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v3'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='core-capability'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='split-lock-detect'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='Snowridge-v4'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='cldemote'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='erms'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='gfni'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdir64b'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='movdiri'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='xsaves'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='athlon'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='athlon-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='core2duo'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='core2duo-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='coreduo'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='coreduo-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='n270'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='n270-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='ss'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='phenom'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <blockers model='phenom-v1'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnow'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <feature name='3dnowext'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </blockers>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </mode>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </cpu>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <memoryBacking supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <enum name='sourceType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>file</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>anonymous</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <value>memfd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </memoryBacking>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <devices>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <disk supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='diskDevice'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>disk</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>cdrom</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>floppy</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>lun</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='bus'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>ide</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>fdc</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>scsi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>sata</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-non-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </disk>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <graphics supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vnc</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>egl-headless</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>dbus</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </graphics>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <video supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='modelType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vga</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>cirrus</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>none</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>bochs</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>ramfb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </video>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <hostdev supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='mode'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>subsystem</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='startupPolicy'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>default</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>mandatory</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>requisite</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>optional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='subsysType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pci</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>scsi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='capsType'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='pciBackend'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </hostdev>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <rng supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtio-non-transitional</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>random</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>egd</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>builtin</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </rng>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <filesystem supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='driverType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>path</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>handle</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>virtiofs</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </filesystem>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <tpm supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tpm-tis</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tpm-crb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>emulator</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>external</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendVersion'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>2.0</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </tpm>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <redirdev supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='bus'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>usb</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </redirdev>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <channel supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pty</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>unix</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </channel>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <crypto supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>qemu</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendModel'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>builtin</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </crypto>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <interface supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='backendType'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>default</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>passt</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </interface>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <panic supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='model'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>isa</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>hyperv</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </panic>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <console supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='type'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>null</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vc</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pty</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>dev</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>file</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>pipe</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>stdio</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>udp</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tcp</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>unix</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>qemu-vdagent</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>dbus</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </console>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </devices>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   <features>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <gic supported='no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <vmcoreinfo supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <genid supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <backingStoreInput supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <backup supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <async-teardown supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <ps2 supported='yes'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <sev supported='no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <sgx supported='no'/>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <hyperv supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='features'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>relaxed</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vapic</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>spinlocks</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vpindex</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>runtime</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>synic</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>stimer</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>reset</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>vendor_id</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>frequencies</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>reenlightenment</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tlbflush</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>ipi</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>avic</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>emsr_bitmap</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>xmm_input</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <defaults>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <spinlocks>4095</spinlocks>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <stimer_direct>on</stimer_direct>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </defaults>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </hyperv>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     <launchSecurity supported='yes'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       <enum name='sectype'>
Nov 25 20:25:33 compute-0 nova_compute[248866]:         <value>tdx</value>
Nov 25 20:25:33 compute-0 nova_compute[248866]:       </enum>
Nov 25 20:25:33 compute-0 nova_compute[248866]:     </launchSecurity>
Nov 25 20:25:33 compute-0 nova_compute[248866]:   </features>
Nov 25 20:25:33 compute-0 nova_compute[248866]: </domainCapabilities>
Nov 25 20:25:33 compute-0 nova_compute[248866]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
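The domainCapabilities document dumped above can be reproduced and filtered outside of nova. A minimal sketch, assuming the libvirt-python bindings and permission to open qemu:///system (the connection URI and the x86_64/kvm arguments are illustrative assumptions, not values read from nova's code):

    # Query libvirt for domain capabilities, as nova's _get_domain_capabilities
    # does above, then list the CPU models reported usable='no' together with
    # the features blocking them.
    import xml.etree.ElementTree as ET
    import libvirt  # libvirt-python bindings

    conn = libvirt.open('qemu:///system')  # assumed connection URI
    caps = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    conn.close()

    # Per-model usability and <blockers> entries are reported under the
    # 'custom' CPU mode of the domainCapabilities XML.
    mode = ET.fromstring(caps).find(".//cpu/mode[@name='custom']")
    for model in mode.findall('model'):
        if model.get('usable') == 'no':
            blockers = mode.find(".//blockers[@model='%s']" % model.text)
            features = [] if blockers is None else \
                [f.get('name') for f in blockers.findall('feature')]
            print(model.text, '->', ', '.join(features))

Against this host the Skylake-Server and Snowridge variants above would print with their avx512*/gfni/movdir* blockers, while the deprecated athlon, core2duo, coreduo, and n270 models print 3dnow*/ss.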
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.104 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.104 248870 INFO nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Secure Boot support detected
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.106 248870 INFO nova.virt.libvirt.driver [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.107 248870 INFO nova.virt.libvirt.driver [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.115 248870 DEBUG nova.virt.libvirt.driver [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
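The three driver messages above (post-copy preferred over auto-converge, emulated TPM enabled) are driven by nova.conf. A sketch of the [libvirt] options involved; the values are assumptions inferred from the log output, not copied from the node's actual configuration:

    [libvirt]
    # Post-copy live migration is permitted and available, so nova skips
    # auto-converge (the INFO message above).
    live_migration_permit_post_copy = True
    # Enables the emulated (swtpm-backed) vTPM path that _check_vtpm_support
    # reports as "Enabling emulated TPM support".
    swtpm_enabled = True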
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.181 248870 INFO nova.virt.node [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Determined node identity 26ab8f11-6940-49dd-985d-e4f9e55b992f from /var/lib/nova/compute_id
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.203 248870 WARNING nova.compute.manager [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Compute nodes ['26ab8f11-6940-49dd-985d-e4f9e55b992f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 25 20:25:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v618: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.242 248870 INFO nova.compute.manager [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.304 248870 WARNING nova.compute.manager [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.304 248870 DEBUG oslo_concurrency.lockutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.305 248870 DEBUG oslo_concurrency.lockutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.305 248870 DEBUG oslo_concurrency.lockutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.305 248870 DEBUG nova.compute.resource_tracker [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.306 248870 DEBUG oslo_concurrency.processutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:25:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:25:33 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324461117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:25:33 compute-0 nova_compute[248866]: 2025-11-25 20:25:33.786 248870 DEBUG oslo_concurrency.processutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
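The 0.480 s subprocess above is the resource tracker probing Ceph for disk capacity. The same probe can be re-run by hand; a sketch assuming the standard JSON layout of ceph df (a top-level 'stats' object carrying total_bytes and total_avail_bytes):

    # Re-run the capacity probe oslo_concurrency.processutils just executed
    # and pull out the cluster-wide totals behind the free_disk figure below.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']  # key layout assumed from `ceph df`
    print('total: %.1f GiB' % (stats['total_bytes'] / 2 ** 30))
    print('avail: %.1f GiB' % (stats['total_avail_bytes'] / 2 ** 30))

On this cluster the figures should match the pgmap lines throughout this log: 60 GiB total, 60 GiB available.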
Nov 25 20:25:33 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 20:25:33 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 25 20:25:34 compute-0 nova_compute[248866]: 2025-11-25 20:25:34.331 248870 WARNING nova.virt.libvirt.driver [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:25:34 compute-0 nova_compute[248866]: 2025-11-25 20:25:34.333 248870 DEBUG nova.compute.resource_tracker [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5329MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:25:34 compute-0 nova_compute[248866]: 2025-11-25 20:25:34.333 248870 DEBUG oslo_concurrency.lockutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:25:34 compute-0 nova_compute[248866]: 2025-11-25 20:25:34.333 248870 DEBUG oslo_concurrency.lockutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:25:34 compute-0 nova_compute[248866]: 2025-11-25 20:25:34.350 248870 WARNING nova.compute.resource_tracker [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] No compute node record for compute-0.ctlplane.example.com:26ab8f11-6940-49dd-985d-e4f9e55b992f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 26ab8f11-6940-49dd-985d-e4f9e55b992f could not be found.
Nov 25 20:25:34 compute-0 nova_compute[248866]: 2025-11-25 20:25:34.369 248870 INFO nova.compute.resource_tracker [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 26ab8f11-6940-49dd-985d-e4f9e55b992f
Nov 25 20:25:34 compute-0 ceph-mon[75144]: pgmap v618: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:34 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1324461117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:25:34 compute-0 nova_compute[248866]: 2025-11-25 20:25:34.474 248870 DEBUG nova.compute.resource_tracker [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:25:34 compute-0 nova_compute[248866]: 2025-11-25 20:25:34.475 248870 DEBUG nova.compute.resource_tracker [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:25:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v619: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:35 compute-0 nova_compute[248866]: 2025-11-25 20:25:35.341 248870 INFO nova.scheduler.client.report [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] [req-d1ecbdb9-628f-4ca7-8749-be0521326b33] Created resource provider record via placement API for resource provider with UUID 26ab8f11-6940-49dd-985d-e4f9e55b992f and name compute-0.ctlplane.example.com.
Nov 25 20:25:35 compute-0 nova_compute[248866]: 2025-11-25 20:25:35.775 248870 DEBUG oslo_concurrency.processutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:25:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:25:36 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878963451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.199 248870 DEBUG oslo_concurrency.processutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.204 248870 DEBUG nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.204 248870 INFO nova.virt.libvirt.host [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] kernel doesn't support AMD SEV
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.205 248870 DEBUG nova.compute.provider_tree [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Updating inventory in ProviderTree for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.206 248870 DEBUG nova.virt.libvirt.driver [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.287 248870 DEBUG nova.scheduler.client.report [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Updated inventory for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.288 248870 DEBUG nova.compute.provider_tree [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Updating resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.288 248870 DEBUG nova.compute.provider_tree [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Updating inventory in ProviderTree for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
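The inventory dicts above are what placement uses for admission arithmetic: the schedulable capacity of each resource class is (total - reserved) * allocation_ratio. A worked check with the figures from this log:

    # Capacity math placement applies to the inventory reported above.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print('%s: %.1f schedulable' % (rc, cap))
    # MEMORY_MB: 7168.0, VCPU: 32.0 (8 pCPUs at a 4.0 ratio), DISK_GB: 53.1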
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.393 248870 DEBUG nova.compute.provider_tree [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Updating resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.426 248870 DEBUG nova.compute.resource_tracker [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.427 248870 DEBUG oslo_concurrency.lockutils [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.427 248870 DEBUG nova.service [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 25 20:25:36 compute-0 ceph-mon[75144]: pgmap v619: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:36 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2878963451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.524 248870 DEBUG nova.service [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 25 20:25:36 compute-0 nova_compute[248866]: 2025-11-25 20:25:36.525 248870 DEBUG nova.servicegroup.drivers.db [None req-6bc5345d-0e01-4a55-9953-8ac064f1a811 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 25 20:25:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v620: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:37 compute-0 ceph-mon[75144]: pgmap v620: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:38 compute-0 nova_compute[248866]: 2025-11-25 20:25:38.527 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:25:38 compute-0 nova_compute[248866]: 2025-11-25 20:25:38.550 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:25:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v621: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:40 compute-0 podman[249277]: 2025-11-25 20:25:40.055749814 +0000 UTC m=+0.144586594 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
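The health_status=healthy event above comes from the container's configured healthcheck ('test': '/openstack/healthcheck' in the config_data). The same probe can be triggered manually; a sketch using podman's standard healthcheck run subcommand, with the container name taken from the log:

    # Manually run the ovn_controller healthcheck; exit status 0 is what
    # podman records as health_status=healthy.
    import subprocess

    result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if result.returncode == 0 else 'unhealthy')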
Nov 25 20:25:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:40 compute-0 ceph-mon[75144]: pgmap v621: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v622: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:41 compute-0 ceph-mon[75144]: pgmap v622: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v623: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:44 compute-0 ceph-mon[75144]: pgmap v623: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v624: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:45 compute-0 ceph-mon[75144]: pgmap v624: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v625: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:48 compute-0 ceph-mon[75144]: pgmap v625: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:25:48.942 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:25:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:25:48.943 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:25:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:25:48.943 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:25:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v626: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:50 compute-0 ceph-mon[75144]: pgmap v626: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:50 compute-0 sudo[249303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:25:50 compute-0 sudo[249303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:50 compute-0 sudo[249303]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:50 compute-0 sudo[249328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:25:50 compute-0 sudo[249328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:50 compute-0 sudo[249328]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:50 compute-0 sudo[249353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:25:50 compute-0 sudo[249353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:50 compute-0 sudo[249353]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:51 compute-0 sudo[249378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:25:51 compute-0 sudo[249378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v627: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:51 compute-0 sudo[249378]: pam_unix(sudo:session): session closed for user root
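The sudo command above is cephadm's host inventory pass: the orchestrator copies a content-hashed cephadm binary to /var/lib/ceph/<fsid>/ and runs its gather-facts subcommand, which prints host facts as JSON. A sketch of the same call, assuming a packaged cephadm on PATH and typical fact key names (both are assumptions, not taken from this log):

    # Run cephadm's fact gathering directly and read a couple of fields;
    # the key names here are assumed from typical gather-facts output.
    import json
    import subprocess

    facts = json.loads(subprocess.check_output(['cephadm', 'gather-facts']))
    print(facts.get('hostname'), facts.get('memory_total_kb'))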
Nov 25 20:25:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:25:51 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:25:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:25:51 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:25:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:25:51 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:25:51 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev a05237be-92b8-44f0-a1a9-6c64fbe532ae does not exist
Nov 25 20:25:51 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev cef8c436-033c-46ab-ba35-1b410d6fb14d does not exist
Nov 25 20:25:51 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 733eae82-97ed-4f34-9781-5a874c68b197 does not exist
Nov 25 20:25:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:25:51 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:25:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:25:51 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:25:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:25:51 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:25:51 compute-0 sudo[249435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:25:51 compute-0 sudo[249435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:51 compute-0 sudo[249435]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:51 compute-0 sudo[249460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:25:51 compute-0 sudo[249460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:51 compute-0 sudo[249460]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:52 compute-0 sudo[249485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:25:52 compute-0 sudo[249485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:52 compute-0 sudo[249485]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:52 compute-0 sudo[249510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:25:52 compute-0 sudo[249510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:52 compute-0 ceph-mon[75144]: pgmap v627: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:25:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:25:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:25:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:25:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:25:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:25:52 compute-0 podman[249574]: 2025-11-25 20:25:52.584728614 +0000 UTC m=+0.069544668 container create ac44efddf6fb33e2788c976dfb376cdd9fbe33b613e64651da4effb1c548b0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:25:52 compute-0 systemd[1]: Started libpod-conmon-ac44efddf6fb33e2788c976dfb376cdd9fbe33b613e64651da4effb1c548b0db.scope.
Nov 25 20:25:52 compute-0 podman[249574]: 2025-11-25 20:25:52.559753075 +0000 UTC m=+0.044569149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:25:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:25:52 compute-0 podman[249574]: 2025-11-25 20:25:52.689506145 +0000 UTC m=+0.174322199 container init ac44efddf6fb33e2788c976dfb376cdd9fbe33b613e64651da4effb1c548b0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 20:25:52 compute-0 podman[249574]: 2025-11-25 20:25:52.702482428 +0000 UTC m=+0.187298472 container start ac44efddf6fb33e2788c976dfb376cdd9fbe33b613e64651da4effb1c548b0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:25:52 compute-0 podman[249574]: 2025-11-25 20:25:52.706438003 +0000 UTC m=+0.191254137 container attach ac44efddf6fb33e2788c976dfb376cdd9fbe33b613e64651da4effb1c548b0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:25:52 compute-0 lucid_wilson[249591]: 167 167
Nov 25 20:25:52 compute-0 systemd[1]: libpod-ac44efddf6fb33e2788c976dfb376cdd9fbe33b613e64651da4effb1c548b0db.scope: Deactivated successfully.
Nov 25 20:25:52 compute-0 podman[249574]: 2025-11-25 20:25:52.71200396 +0000 UTC m=+0.196820004 container died ac44efddf6fb33e2788c976dfb376cdd9fbe33b613e64651da4effb1c548b0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0ed9845af6ca2c5f25b346bcac5aacc71f3ad26a7d90fcd0ec2504c6659d8e0-merged.mount: Deactivated successfully.
Nov 25 20:25:52 compute-0 podman[249574]: 2025-11-25 20:25:52.760638045 +0000 UTC m=+0.245454109 container remove ac44efddf6fb33e2788c976dfb376cdd9fbe33b613e64651da4effb1c548b0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:25:52 compute-0 systemd[1]: libpod-conmon-ac44efddf6fb33e2788c976dfb376cdd9fbe33b613e64651da4effb1c548b0db.scope: Deactivated successfully.
Nov 25 20:25:53 compute-0 podman[249615]: 2025-11-25 20:25:53.026556476 +0000 UTC m=+0.078452745 container create ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:25:53 compute-0 podman[249615]: 2025-11-25 20:25:52.987235616 +0000 UTC m=+0.039131935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:25:53 compute-0 systemd[1]: Started libpod-conmon-ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef.scope.
Nov 25 20:25:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v628: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e8a9670367c70b7b9c4b575c82c14344b0252c8e1d5b8d20e4c7548b6b05cae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e8a9670367c70b7b9c4b575c82c14344b0252c8e1d5b8d20e4c7548b6b05cae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e8a9670367c70b7b9c4b575c82c14344b0252c8e1d5b8d20e4c7548b6b05cae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e8a9670367c70b7b9c4b575c82c14344b0252c8e1d5b8d20e4c7548b6b05cae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e8a9670367c70b7b9c4b575c82c14344b0252c8e1d5b8d20e4c7548b6b05cae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:53 compute-0 podman[249615]: 2025-11-25 20:25:53.31341776 +0000 UTC m=+0.365314109 container init ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:25:53 compute-0 podman[249615]: 2025-11-25 20:25:53.326127576 +0000 UTC m=+0.378023835 container start ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:25:53 compute-0 podman[249615]: 2025-11-25 20:25:53.36905965 +0000 UTC m=+0.420955969 container attach ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:25:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:25:54 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2960196126' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:25:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:25:54 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2960196126' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:25:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:25:54 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/224851206' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:25:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:25:54 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/224851206' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:25:54 compute-0 vibrant_snyder[249631]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:25:54 compute-0 vibrant_snyder[249631]: --> relative data size: 1.0
Nov 25 20:25:54 compute-0 vibrant_snyder[249631]: --> All data devices are unavailable
Nov 25 20:25:54 compute-0 ceph-mon[75144]: pgmap v628: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:54 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2960196126' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:25:54 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2960196126' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:25:54 compute-0 systemd[1]: libpod-ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef.scope: Deactivated successfully.
Nov 25 20:25:54 compute-0 systemd[1]: libpod-ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef.scope: Consumed 1.121s CPU time.
Nov 25 20:25:54 compute-0 podman[249615]: 2025-11-25 20:25:54.505509806 +0000 UTC m=+1.557406035 container died ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 25 20:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e8a9670367c70b7b9c4b575c82c14344b0252c8e1d5b8d20e4c7548b6b05cae-merged.mount: Deactivated successfully.
Nov 25 20:25:54 compute-0 podman[249615]: 2025-11-25 20:25:54.854103123 +0000 UTC m=+1.905999382 container remove ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:25:54 compute-0 systemd[1]: libpod-conmon-ac80ef9a64fea5239aac675487a8552bc5675934a94a166c60612f919b369cef.scope: Deactivated successfully.
Nov 25 20:25:54 compute-0 sudo[249510]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:25:54 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1527919360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:25:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:25:54 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1527919360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:25:54 compute-0 sudo[249674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:25:54 compute-0 sudo[249674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:54 compute-0 sudo[249674]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:55 compute-0 sudo[249699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:25:55 compute-0 sudo[249699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:55 compute-0 sudo[249699]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:55 compute-0 sudo[249724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:25:55 compute-0 sudo[249724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:55 compute-0 sudo[249724]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v629: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:25:55 compute-0 sudo[249749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:25:55 compute-0 sudo[249749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:55 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/224851206' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:25:55 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/224851206' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:25:55 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1527919360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:25:55 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1527919360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:25:55 compute-0 ceph-mon[75144]: pgmap v629: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:55 compute-0 podman[249814]: 2025-11-25 20:25:55.727370909 +0000 UTC m=+0.055821267 container create 8a113cf91741e2baf7c9fa304be9497a10b398f72a50618d0b3ede2c45f78b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dirac, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 20:25:55 compute-0 systemd[1]: Started libpod-conmon-8a113cf91741e2baf7c9fa304be9497a10b398f72a50618d0b3ede2c45f78b9e.scope.
Nov 25 20:25:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:25:55 compute-0 podman[249814]: 2025-11-25 20:25:55.700700605 +0000 UTC m=+0.029150993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:25:55 compute-0 podman[249814]: 2025-11-25 20:25:55.836784302 +0000 UTC m=+0.165234750 container init 8a113cf91741e2baf7c9fa304be9497a10b398f72a50618d0b3ede2c45f78b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:25:55 compute-0 podman[249814]: 2025-11-25 20:25:55.847590588 +0000 UTC m=+0.176040986 container start 8a113cf91741e2baf7c9fa304be9497a10b398f72a50618d0b3ede2c45f78b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:25:55 compute-0 fervent_dirac[249829]: 167 167
Nov 25 20:25:55 compute-0 systemd[1]: libpod-8a113cf91741e2baf7c9fa304be9497a10b398f72a50618d0b3ede2c45f78b9e.scope: Deactivated successfully.
Nov 25 20:25:55 compute-0 podman[249814]: 2025-11-25 20:25:55.860329855 +0000 UTC m=+0.188780213 container attach 8a113cf91741e2baf7c9fa304be9497a10b398f72a50618d0b3ede2c45f78b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:25:55 compute-0 podman[249814]: 2025-11-25 20:25:55.861148126 +0000 UTC m=+0.189598484 container died 8a113cf91741e2baf7c9fa304be9497a10b398f72a50618d0b3ede2c45f78b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dirac, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:25:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-985d8fe018ff71e8925508e6e7896d3a13902933e4e9f29359fdb5cb08d0886d-merged.mount: Deactivated successfully.
Nov 25 20:25:56 compute-0 podman[249814]: 2025-11-25 20:25:56.064258007 +0000 UTC m=+0.392708375 container remove 8a113cf91741e2baf7c9fa304be9497a10b398f72a50618d0b3ede2c45f78b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:25:56 compute-0 systemd[1]: libpod-conmon-8a113cf91741e2baf7c9fa304be9497a10b398f72a50618d0b3ede2c45f78b9e.scope: Deactivated successfully.
Nov 25 20:25:56 compute-0 podman[249855]: 2025-11-25 20:25:56.315696223 +0000 UTC m=+0.096799699 container create 82d93a17d7255885433a3c557a4973b9145feec6b06b34ba39976ac9ea489ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 20:25:56 compute-0 podman[249855]: 2025-11-25 20:25:56.257947337 +0000 UTC m=+0.039050853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:25:56 compute-0 systemd[1]: Started libpod-conmon-82d93a17d7255885433a3c557a4973b9145feec6b06b34ba39976ac9ea489ae9.scope.
Nov 25 20:25:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee691b4f7078eff174ae2dfa2827b842fd81a52c6b74668861398408828c9081/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee691b4f7078eff174ae2dfa2827b842fd81a52c6b74668861398408828c9081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee691b4f7078eff174ae2dfa2827b842fd81a52c6b74668861398408828c9081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee691b4f7078eff174ae2dfa2827b842fd81a52c6b74668861398408828c9081/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:25:56 compute-0 podman[249855]: 2025-11-25 20:25:56.547207664 +0000 UTC m=+0.328311220 container init 82d93a17d7255885433a3c557a4973b9145feec6b06b34ba39976ac9ea489ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:25:56 compute-0 podman[249855]: 2025-11-25 20:25:56.560607049 +0000 UTC m=+0.341710555 container start 82d93a17d7255885433a3c557a4973b9145feec6b06b34ba39976ac9ea489ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:25:56 compute-0 podman[249855]: 2025-11-25 20:25:56.631568995 +0000 UTC m=+0.412672571 container attach 82d93a17d7255885433a3c557a4973b9145feec6b06b34ba39976ac9ea489ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:25:56
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'vms', '.mgr']
Nov 25 20:25:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:25:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v630: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:57 compute-0 stoic_wing[249871]: {
Nov 25 20:25:57 compute-0 stoic_wing[249871]:     "0": [
Nov 25 20:25:57 compute-0 stoic_wing[249871]:         {
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "devices": [
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "/dev/loop3"
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             ],
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_name": "ceph_lv0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_size": "21470642176",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "name": "ceph_lv0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "tags": {
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.cluster_name": "ceph",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.crush_device_class": "",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.encrypted": "0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.osd_id": "0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.type": "block",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.vdo": "0"
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             },
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "type": "block",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "vg_name": "ceph_vg0"
Nov 25 20:25:57 compute-0 stoic_wing[249871]:         }
Nov 25 20:25:57 compute-0 stoic_wing[249871]:     ],
Nov 25 20:25:57 compute-0 stoic_wing[249871]:     "1": [
Nov 25 20:25:57 compute-0 stoic_wing[249871]:         {
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "devices": [
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "/dev/loop4"
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             ],
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_name": "ceph_lv1",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_size": "21470642176",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "name": "ceph_lv1",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "tags": {
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.cluster_name": "ceph",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.crush_device_class": "",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.encrypted": "0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.osd_id": "1",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.type": "block",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.vdo": "0"
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             },
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "type": "block",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "vg_name": "ceph_vg1"
Nov 25 20:25:57 compute-0 stoic_wing[249871]:         }
Nov 25 20:25:57 compute-0 stoic_wing[249871]:     ],
Nov 25 20:25:57 compute-0 stoic_wing[249871]:     "2": [
Nov 25 20:25:57 compute-0 stoic_wing[249871]:         {
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "devices": [
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "/dev/loop5"
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             ],
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_name": "ceph_lv2",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_size": "21470642176",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "name": "ceph_lv2",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "tags": {
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.cluster_name": "ceph",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.crush_device_class": "",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.encrypted": "0",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.osd_id": "2",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.type": "block",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:                 "ceph.vdo": "0"
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             },
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "type": "block",
Nov 25 20:25:57 compute-0 stoic_wing[249871]:             "vg_name": "ceph_vg2"
Nov 25 20:25:57 compute-0 stoic_wing[249871]:         }
Nov 25 20:25:57 compute-0 stoic_wing[249871]:     ]
Nov 25 20:25:57 compute-0 stoic_wing[249871]: }
Nov 25 20:25:57 compute-0 systemd[1]: libpod-82d93a17d7255885433a3c557a4973b9145feec6b06b34ba39976ac9ea489ae9.scope: Deactivated successfully.
Nov 25 20:25:57 compute-0 podman[249855]: 2025-11-25 20:25:57.481166376 +0000 UTC m=+1.262269922 container died 82d93a17d7255885433a3c557a4973b9145feec6b06b34ba39976ac9ea489ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:25:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee691b4f7078eff174ae2dfa2827b842fd81a52c6b74668861398408828c9081-merged.mount: Deactivated successfully.
Nov 25 20:25:58 compute-0 podman[249855]: 2025-11-25 20:25:58.158143704 +0000 UTC m=+1.939247170 container remove 82d93a17d7255885433a3c557a4973b9145feec6b06b34ba39976ac9ea489ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:25:58 compute-0 systemd[1]: libpod-conmon-82d93a17d7255885433a3c557a4973b9145feec6b06b34ba39976ac9ea489ae9.scope: Deactivated successfully.
Nov 25 20:25:58 compute-0 sudo[249749]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:58 compute-0 sudo[249894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:25:58 compute-0 sudo[249894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:58 compute-0 sudo[249894]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:58 compute-0 ceph-mon[75144]: pgmap v630: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:58 compute-0 sudo[249919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:25:58 compute-0 sudo[249919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:58 compute-0 sudo[249919]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:58 compute-0 sudo[249944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:25:58 compute-0 sudo[249944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:58 compute-0 sudo[249944]: pam_unix(sudo:session): session closed for user root
Nov 25 20:25:58 compute-0 sudo[249969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:25:58 compute-0 sudo[249969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:25:58 compute-0 podman[250036]: 2025-11-25 20:25:58.886687895 +0000 UTC m=+0.037114132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:25:59 compute-0 podman[250036]: 2025-11-25 20:25:59.047507686 +0000 UTC m=+0.197933903 container create 23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:25:59 compute-0 systemd[1]: Started libpod-conmon-23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387.scope.
Nov 25 20:25:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:25:59 compute-0 podman[250036]: 2025-11-25 20:25:59.167267172 +0000 UTC m=+0.317693369 container init 23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:25:59 compute-0 podman[250048]: 2025-11-25 20:25:59.168312931 +0000 UTC m=+0.252356004 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 20:25:59 compute-0 systemd[1]: libpod-23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387.scope: Deactivated successfully.
Nov 25 20:25:59 compute-0 podman[250036]: 2025-11-25 20:25:59.190309631 +0000 UTC m=+0.340735828 container start 23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:25:59 compute-0 heuristic_ramanujan[250063]: 167 167
Nov 25 20:25:59 compute-0 conmon[250063]: conmon 23cdc230361ab8269379 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387.scope/container/memory.events
Nov 25 20:25:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v631: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:25:59 compute-0 podman[250036]: 2025-11-25 20:25:59.434095716 +0000 UTC m=+0.584522003 container attach 23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:25:59 compute-0 podman[250036]: 2025-11-25 20:25:59.437670522 +0000 UTC m=+0.588096739 container died 23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:25:59 compute-0 ceph-mon[75144]: pgmap v631: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b03c6c345dfdacbb10913c22e2974dba843342af247bc3073c717942b33f4030-merged.mount: Deactivated successfully.
Nov 25 20:26:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:00 compute-0 podman[250036]: 2025-11-25 20:26:00.561001309 +0000 UTC m=+1.711427516 container remove 23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:26:00 compute-0 systemd[1]: libpod-conmon-23cdc230361ab82693794570c8e8896c66986035aaa1de32e292df36ac3a1387.scope: Deactivated successfully.
Nov 25 20:26:00 compute-0 podman[250093]: 2025-11-25 20:26:00.80536797 +0000 UTC m=+0.043215524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:26:01 compute-0 podman[250093]: 2025-11-25 20:26:01.066712289 +0000 UTC m=+0.304559753 container create fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:26:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v632: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:01 compute-0 systemd[1]: Started libpod-conmon-fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702.scope.
Nov 25 20:26:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e780ec5690a0ac107d017fb54a5d138049d2d1821b5e05ca8998aefe6a55c96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e780ec5690a0ac107d017fb54a5d138049d2d1821b5e05ca8998aefe6a55c96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e780ec5690a0ac107d017fb54a5d138049d2d1821b5e05ca8998aefe6a55c96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e780ec5690a0ac107d017fb54a5d138049d2d1821b5e05ca8998aefe6a55c96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:26:01 compute-0 podman[250093]: 2025-11-25 20:26:01.728044883 +0000 UTC m=+0.965892377 container init fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 20:26:01 compute-0 podman[250093]: 2025-11-25 20:26:01.73966449 +0000 UTC m=+0.977511994 container start fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:26:01 compute-0 ceph-mon[75144]: pgmap v632: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:01 compute-0 podman[250093]: 2025-11-25 20:26:01.884830648 +0000 UTC m=+1.122678192 container attach fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:26:02 compute-0 podman[250115]: 2025-11-25 20:26:02.003045264 +0000 UTC m=+0.088110091 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:26:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]: {
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "osd_id": 2,
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "type": "bluestore"
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:     },
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "osd_id": 1,
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "type": "bluestore"
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:     },
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "osd_id": 0,
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:         "type": "bluestore"
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]:     }
Nov 25 20:26:02 compute-0 naughty_hypatia[250110]: }
Nov 25 20:26:02 compute-0 systemd[1]: libpod-fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702.scope: Deactivated successfully.
Nov 25 20:26:02 compute-0 systemd[1]: libpod-fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702.scope: Consumed 1.132s CPU time.
Nov 25 20:26:02 compute-0 conmon[250110]: conmon fd1993aad8f1d719e299 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702.scope/container/memory.events
Nov 25 20:26:02 compute-0 podman[250093]: 2025-11-25 20:26:02.896603247 +0000 UTC m=+2.134450751 container died fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e780ec5690a0ac107d017fb54a5d138049d2d1821b5e05ca8998aefe6a55c96-merged.mount: Deactivated successfully.
Nov 25 20:26:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v633: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:03 compute-0 ceph-mon[75144]: pgmap v633: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:03 compute-0 podman[250093]: 2025-11-25 20:26:03.69743121 +0000 UTC m=+2.935278694 container remove fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:26:03 compute-0 sudo[249969]: pam_unix(sudo:session): session closed for user root
Nov 25 20:26:03 compute-0 systemd[1]: libpod-conmon-fd1993aad8f1d719e299cb89ae3d17835cb91e13d5dc1259142fab8407a90702.scope: Deactivated successfully.
Nov 25 20:26:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:26:03 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:26:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:26:03 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:26:03 compute-0 sudo[250175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:26:04 compute-0 sudo[250175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:26:04 compute-0 sudo[250175]: pam_unix(sudo:session): session closed for user root
Nov 25 20:26:04 compute-0 sudo[250200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:26:04 compute-0 sudo[250200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:26:04 compute-0 sudo[250200]: pam_unix(sudo:session): session closed for user root
Nov 25 20:26:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:26:04 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:26:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v634: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:06 compute-0 ceph-mon[75144]: pgmap v634: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v635: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:08 compute-0 ceph-mon[75144]: pgmap v635: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v636: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:09 compute-0 ceph-mon[75144]: pgmap v636: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:11 compute-0 podman[250225]: 2025-11-25 20:26:11.041761128 +0000 UTC m=+0.134655441 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:26:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v637: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:12 compute-0 ceph-mon[75144]: pgmap v637: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v638: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:13 compute-0 ceph-mon[75144]: pgmap v638: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v639: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.277890) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102375277975, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1904, "num_deletes": 505, "total_data_size": 1806979, "memory_usage": 1843760, "flush_reason": "Manual Compaction"}
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102375309748, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1764229, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11787, "largest_seqno": 13690, "table_properties": {"data_size": 1755957, "index_size": 4578, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 19138, "raw_average_key_size": 18, "raw_value_size": 1737471, "raw_average_value_size": 1680, "num_data_blocks": 210, "num_entries": 1034, "num_filter_entries": 1034, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764102190, "oldest_key_time": 1764102190, "file_creation_time": 1764102375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 32008 microseconds, and 6641 cpu microseconds.
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.309903) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1764229 bytes OK
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.309932) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.358536) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.358591) EVENT_LOG_v1 {"time_micros": 1764102375358577, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.358623) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1797826, prev total WAL file size 1814578, number of live WAL files 2.
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.360114) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1722KB)], [32(4343KB)]
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102375360176, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 6212335, "oldest_snapshot_seqno": -1}
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3101 keys, 4826245 bytes, temperature: kUnknown
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102375646455, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 4826245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4803778, "index_size": 13590, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 73677, "raw_average_key_size": 23, "raw_value_size": 4746364, "raw_average_value_size": 1530, "num_data_blocks": 590, "num_entries": 3101, "num_filter_entries": 3101, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764102375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.646927) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 4826245 bytes
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.686871) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 21.7 rd, 16.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 4.2 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.7) OK, records in: 4124, records dropped: 1023 output_compression: NoCompression
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.686914) EVENT_LOG_v1 {"time_micros": 1764102375686898, "job": 14, "event": "compaction_finished", "compaction_time_micros": 286396, "compaction_time_cpu_micros": 24233, "output_level": 6, "num_output_files": 1, "total_output_size": 4826245, "num_input_records": 4124, "num_output_records": 3101, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102375687554, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102375688549, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.359993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.688696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.688702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.688705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.688707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:26:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:26:15.688710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:26:16 compute-0 ceph-mon[75144]: pgmap v639: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v640: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:18 compute-0 ceph-mon[75144]: pgmap v640: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v641: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:20 compute-0 ceph-mon[75144]: pgmap v641: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v642: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:21 compute-0 ceph-mon[75144]: pgmap v642: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v643: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:24 compute-0 ceph-mon[75144]: pgmap v643: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v644: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:26 compute-0 ceph-mon[75144]: pgmap v644: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:26:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:26:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:26:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:26:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:26:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:26:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v645: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:27 compute-0 ceph-mon[75144]: pgmap v645: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v646: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:29 compute-0 podman[250251]: 2025-11-25 20:26:29.964980959 +0000 UTC m=+0.061060706 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:26:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:30 compute-0 ceph-mon[75144]: pgmap v646: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.045 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.046 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.046 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.046 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.060 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.060 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.061 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.061 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.062 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.062 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.062 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.063 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.063 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.096 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.097 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.097 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.098 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.099 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:26:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v647: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:26:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2808521739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.567 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.812 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.814 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5329MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.815 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.815 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.939 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:26:31 compute-0 nova_compute[248866]: 2025-11-25 20:26:31.940 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:26:32 compute-0 nova_compute[248866]: 2025-11-25 20:26:32.003 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:26:32 compute-0 ceph-mon[75144]: pgmap v647: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:32 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2808521739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:26:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:26:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2713971375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:26:32 compute-0 nova_compute[248866]: 2025-11-25 20:26:32.470 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:26:32 compute-0 nova_compute[248866]: 2025-11-25 20:26:32.477 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:26:32 compute-0 nova_compute[248866]: 2025-11-25 20:26:32.505 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:26:32 compute-0 nova_compute[248866]: 2025-11-25 20:26:32.557 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:26:32 compute-0 nova_compute[248866]: 2025-11-25 20:26:32.558 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:26:33 compute-0 podman[250315]: 2025-11-25 20:26:33.000249075 +0000 UTC m=+0.088799858 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:26:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v648: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:33 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2713971375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:26:34 compute-0 ceph-mon[75144]: pgmap v648: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v649: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:36 compute-0 ceph-mon[75144]: pgmap v649: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v650: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:38 compute-0 ceph-mon[75144]: pgmap v650: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v651: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:40 compute-0 ceph-mon[75144]: pgmap v651: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v652: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:41 compute-0 ceph-mon[75144]: pgmap v652: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:42 compute-0 podman[250335]: 2025-11-25 20:26:42.024769575 +0000 UTC m=+0.123882196 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 25 20:26:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v653: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:44 compute-0 ceph-mon[75144]: pgmap v653: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v654: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:45 compute-0 ceph-mon[75144]: pgmap v654: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v655: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:47 compute-0 ceph-mon[75144]: pgmap v655: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:26:48.943 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:26:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:26:48.944 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:26:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:26:48.944 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
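The three lockutils lines above are oslo.concurrency's standard trace for one critical section: "Acquiring" before the wait, "acquired" with the time waited, and "released" with the time held, here around ProcessMonitor._check_child_processes. A hand-rolled sketch of the same pattern (not oslo's actual decorator):

```python
import logging
import threading
import time
from contextlib import contextmanager

LOG = logging.getLogger(__name__)
_lock = threading.Lock()

@contextmanager
def logged_lock(name):
    # Mirrors the Acquiring / acquired / released trace in the log.
    LOG.debug('Acquiring lock "%s"', name)
    t0 = time.monotonic()
    with _lock:
        LOG.debug('Lock "%s" acquired :: waited %.3fs', name, time.monotonic() - t0)
        t1 = time.monotonic()
        try:
            yield
        finally:
            LOG.debug('Lock "%s" "released" :: held %.3fs', name, time.monotonic() - t1)

# with logged_lock("_check_child_processes"):
#     ...  # inspect child processes
```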
Nov 25 20:26:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v656: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:49 compute-0 ceph-mon[75144]: pgmap v656: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v657: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:52 compute-0 ceph-mon[75144]: pgmap v657: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v658: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:53 compute-0 ceph-mon[75144]: pgmap v658: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v659: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:26:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:26:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
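The headline numbers in the dump above are internally consistent: 4230 WAL writes over 387 syncs gives the reported 10.93 writes per sync, and 0.02 GB over the 1200 s uptime is roughly the reported 0.01 MB/s (both printed figures are rounded). The recurring occupancy: 18446744073709551615 is 2^64 - 1, which reads as an unset uint64 counter in this BinnedLRUCache build rather than a real occupancy. A quick re-derivation:

```python
writes, syncs = 4230, 387
ingest_gb, uptime_s = 0.02, 1200.1

print(f"writes per sync: {writes / syncs:.2f}")             # 10.93, as reported
print(f"WAL rate: {ingest_gb * 1024 / uptime_s:.3f} MB/s")  # ~0.017, logged as 0.01
print(2**64 - 1)  # 18446744073709551615, the "occupancy" value above
```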
Nov 25 20:26:56 compute-0 ceph-mon[75144]: pgmap v659: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:26:56
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images']
Nov 25 20:26:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
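The balancer lines read as a no-op round: in upmap mode it walks the listed pools and prepares up to a fixed budget of changes ("0/10", presumably the default cap of 10 optimizations per round) while keeping misplaced data under the 0.05 ratio; with all 193 PGs active+clean and evenly placed, nothing needed moving. Purely illustrative arithmetic on those two numbers:

```python
pgs = 193
max_misplaced_ratio = 0.05  # "max misplaced 0.050000" in the log

# At most ~9 PGs may be misplaced (moving) at once under this guard.
print(int(pgs * max_misplaced_ratio))  # 9
```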
Nov 25 20:26:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v660: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:57 compute-0 ceph-mon[75144]: pgmap v660: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:26:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v661: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:00 compute-0 ceph-mon[75144]: pgmap v661: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:00 compute-0 podman[250361]: 2025-11-25 20:27:00.990124558 +0000 UTC m=+0.082300827 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 20:27:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v662: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:27:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
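The block above is RocksDB's periodic stats dump (600-second interval) for every column family in this OSD's BlueStore DB: default, the m-*/p-*/O-* shards, and L and P. The compaction counters are all zero or near-zero because the cluster is essentially idle (the pgmap lines show only 449 KiB of data), so the dump is dominated by fixed per-column-family scaffolding. The headline WAL figures are internally consistent and can be re-derived from one another; a quick check using only the numbers printed in these dumps:

    # "Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync"
    wal_writes, wal_syncs = 4381, 395
    print(f"{wal_writes / wal_syncs:.2f} writes per sync")   # 11.09, as logged

    # Same check for the second OSD's dump further below:
    # "Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync"
    print(f"{3955 / 313:.2f} writes per sync")               # 12.64, as logged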
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:27:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
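The '.mgr' line exposes the autoscaler's arithmetic: the raw PG target is the pool's usage ratio times its bias times a per-root PG capacity that here comes out to exactly 300 (plausibly three OSDs at the default mon_target_pg_per_osd of 100), and the result is then quantized to a power of two with a floor of 1. A rough reconstruction under those assumptions; the 300 multiplier and the rounding rule are inferred from the logged numbers, not lifted from the pg_autoscaler source:

    import math

    def raw_pg_target(usage_ratio: float, bias: float = 1.0,
                      root_pg_capacity: float = 300.0) -> float:
        # Inferred: 1.4371499967441557e-05 * 1.0 * 300 reproduces the
        # pg target logged for pool '.mgr'.
        return usage_ratio * bias * root_pg_capacity

    def quantize(raw: float) -> int:
        # Assumed rounding rule: nearest power of two, never below 1.
        if raw <= 1:
            return 1
        return 2 ** round(math.log2(raw))

    raw = raw_pg_target(1.4371499967441557e-05)
    print(raw, quantize(raw))   # ~0.0043114, quantized to 1, as logged

The empty pools report a raw target of 0.0 yet stay at "quantized to 32 (current 32)"; the module appears to fall back to the current pg_num rather than shrink idle pools, which is why no adjustments follow this _maybe_adjust pass.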
Nov 25 20:27:02 compute-0 ceph-mon[75144]: pgmap v662: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v663: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
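These pgmap lines, printed every couple of seconds by the mgr and echoed by the mon, compress cluster state into one string. A small parser for that shape, assuming exactly the "N pgs: N active+clean; DATA data, USED used, AVAIL / TOTAL avail" layout seen here:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    m = PGMAP.search("pgmap v663: 193 pgs: 193 active+clean; "
                     "449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail")
    assert m and m["pgs"] == "193" and m["used"] == "80 MiB"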
Nov 25 20:27:03 compute-0 podman[250380]: 2025-11-25 20:27:03.9629445 +0000 UTC m=+0.065516954 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 20:27:04 compute-0 sudo[250399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:04 compute-0 sudo[250399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:04 compute-0 sudo[250399]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:04 compute-0 sudo[250424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:27:04 compute-0 sudo[250424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:04 compute-0 sudo[250424]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:04 compute-0 ceph-mgr[75443]: [devicehealth INFO root] Check health
Nov 25 20:27:04 compute-0 sudo[250449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:04 compute-0 sudo[250449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:04 compute-0 sudo[250449]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:04 compute-0 ceph-mon[75144]: pgmap v663: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:04 compute-0 sudo[250474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 20:27:04 compute-0 sudo[250474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:04 compute-0 sudo[250474]: pam_unix(sudo:session): session closed for user root
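The /bin/true, which python3, cephadm check-host triple above is the cephadm mgr module probing this host over SSH: confirm passwordless sudo works, locate the interpreter, then execute the content-addressed copy of the cephadm binary it keeps under /var/lib/ceph/<fsid>/. A sketch of the same sequence driven with a plain ssh client (run_on_host is hypothetical; the module itself does this through asyncssh):

    import subprocess

    HOST = "compute-0"
    CEPHADM = ("/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    def run_on_host(*cmd: str) -> str:
        """Run a command as root via ceph-admin's passwordless sudo."""
        res = subprocess.run(["ssh", f"ceph-admin@{HOST}", "sudo", *cmd],
                             check=True, capture_output=True, text=True)
        return res.stdout

    run_on_host("/bin/true")                          # sudo sanity check
    python3 = run_on_host("/bin/which", "python3").strip()
    run_on_host(python3, CEPHADM, "--timeout", "895", "check-host")

Each sudo invocation shows up as the PWD/USER/COMMAND audit line plus a pam_unix session open/close pair, which is why the pattern repeats so regularly in this log.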
Nov 25 20:27:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:27:04 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:27:04 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:04 compute-0 sudo[250519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:04 compute-0 sudo[250519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:04 compute-0 sudo[250519]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:05 compute-0 sudo[250544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:27:05 compute-0 sudo[250544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:05 compute-0 sudo[250544]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:05 compute-0 sudo[250569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:05 compute-0 sudo[250569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:05 compute-0 sudo[250569]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:05 compute-0 sudo[250594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:27:05 compute-0 sudo[250594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v664: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:05 compute-0 sudo[250594]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:05 compute-0 ceph-mon[75144]: pgmap v664: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:27:05 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:27:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:27:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:27:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:27:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:05 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d3275526-90b0-421b-86dc-0e62ac8f1b12 does not exist
Nov 25 20:27:05 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 88e8fd46-a804-4c69-8fe3-acc02006871c does not exist
Nov 25 20:27:05 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev e2a1f748-97da-4a57-859c-555bb625bc98 does not exist
Nov 25 20:27:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:27:05 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:27:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:27:05 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:27:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:27:05 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
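Each handle_command / audit pair above is a structured command the cephadm mgr module submits to the monitor. From inside a mgr module the same requests go through MgrModule.mon_command(), which takes the JSON command dict; the (return code, output, status) tuple order below is a sketch from memory rather than a verified signature:

    # Inside a ceph-mgr module (self is the MgrModule instance): replay
    # the two commands audited above.
    def gather_bootstrap_material(self):
        ret, out, status = self.mon_command(
            {"prefix": "config generate-minimal-conf"})
        if ret != 0:
            raise RuntimeError(status)
        minimal_conf = out

        ret, out, status = self.mon_command(
            {"prefix": "auth get", "entity": "client.bootstrap-osd"})
        if ret != 0:
            raise RuntimeError(status)
        # Minimal ceph.conf plus the bootstrap-osd keyring: exactly what
        # cephadm ships to a host before creating OSDs, as happens next.
        return minimal_conf, out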
Nov 25 20:27:05 compute-0 sudo[250648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:06 compute-0 sudo[250648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:06 compute-0 sudo[250648]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:06 compute-0 sudo[250673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:27:06 compute-0 sudo[250673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:06 compute-0 sudo[250673]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:06 compute-0 sudo[250698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:06 compute-0 sudo[250698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:06 compute-0 sudo[250698]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:06 compute-0 sudo[250723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:27:06 compute-0 sudo[250723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
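The long sudo line above is cephadm creating three OSDs in one shot: it runs its content-addressed binary with the pinned ceph container image and hands ceph-volume the three pre-built logical volumes. --no-auto disables automatic device grouping, --yes skips the interactive report, --no-systemd skips unit installation (cephadm manages the daemons as containers itself), and CEPH_VOLUME_OSDSPEC_AFFINITY ties the new OSDs back to the 'default_drive_group' spec. Inside the container the effective call reduces to roughly the following (a sketch, assuming it runs as root in the ceph image):

    import os
    import subprocess

    lvs = ["/dev/ceph_vg0/ceph_lv0",
           "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]

    subprocess.run(
        ["ceph-volume", "--fsid", "712dd110-763a-5547-8ef7-acda1414fdce",
         "lvm", "batch", "--no-auto", *lvs, "--yes", "--no-systemd"],
        check=True,
        # ceph-volume reads this variable to record which drive group
        # spec the resulting OSDs belong to.
        env={**os.environ,
             "CEPH_VOLUME_OSDSPEC_AFFINITY": "default_drive_group"},
    )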
Nov 25 20:27:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:27:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:27:06 compute-0 podman[250788]: 2025-11-25 20:27:06.776137317 +0000 UTC m=+0.070662882 container create 56ce17e66e445137fcdd4ec1abf140daefefcb4ec0080da99e676d2c0a89af6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_burnell, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 20:27:06 compute-0 systemd[1]: Started libpod-conmon-56ce17e66e445137fcdd4ec1abf140daefefcb4ec0080da99e676d2c0a89af6b.scope.
Nov 25 20:27:06 compute-0 podman[250788]: 2025-11-25 20:27:06.744582633 +0000 UTC m=+0.039108278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:27:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:27:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:27:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:27:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:27:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:27:06 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:27:06 compute-0 podman[250788]: 2025-11-25 20:27:06.878762884 +0000 UTC m=+0.173288459 container init 56ce17e66e445137fcdd4ec1abf140daefefcb4ec0080da99e676d2c0a89af6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_burnell, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:27:06 compute-0 podman[250788]: 2025-11-25 20:27:06.890736854 +0000 UTC m=+0.185262419 container start 56ce17e66e445137fcdd4ec1abf140daefefcb4ec0080da99e676d2c0a89af6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_burnell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:27:06 compute-0 podman[250788]: 2025-11-25 20:27:06.894447864 +0000 UTC m=+0.188973429 container attach 56ce17e66e445137fcdd4ec1abf140daefefcb4ec0080da99e676d2c0a89af6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:27:06 compute-0 determined_burnell[250804]: 167 167
Nov 25 20:27:06 compute-0 systemd[1]: libpod-56ce17e66e445137fcdd4ec1abf140daefefcb4ec0080da99e676d2c0a89af6b.scope: Deactivated successfully.
Nov 25 20:27:06 compute-0 podman[250788]: 2025-11-25 20:27:06.898941484 +0000 UTC m=+0.193467049 container died 56ce17e66e445137fcdd4ec1abf140daefefcb4ec0080da99e676d2c0a89af6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 20:27:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3cc56f6e0fb877e27312d31a892cb3fe777ed9d41b33773aa243d63d285bcec-merged.mount: Deactivated successfully.
Nov 25 20:27:06 compute-0 podman[250788]: 2025-11-25 20:27:06.952305922 +0000 UTC m=+0.246831497 container remove 56ce17e66e445137fcdd4ec1abf140daefefcb4ec0080da99e676d2c0a89af6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_burnell, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:27:06 compute-0 systemd[1]: libpod-conmon-56ce17e66e445137fcdd4ec1abf140daefefcb4ec0080da99e676d2c0a89af6b.scope: Deactivated successfully.
Nov 25 20:27:07 compute-0 podman[250829]: 2025-11-25 20:27:07.201446311 +0000 UTC m=+0.070297943 container create b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:27:07 compute-0 systemd[1]: Started libpod-conmon-b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd.scope.
Nov 25 20:27:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v665: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:07 compute-0 podman[250829]: 2025-11-25 20:27:07.174762207 +0000 UTC m=+0.043613889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:27:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a019fd381aebcec26fed458ef223d96cedf2d50e12150953fea32b264c50853a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a019fd381aebcec26fed458ef223d96cedf2d50e12150953fea32b264c50853a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a019fd381aebcec26fed458ef223d96cedf2d50e12150953fea32b264c50853a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a019fd381aebcec26fed458ef223d96cedf2d50e12150953fea32b264c50853a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a019fd381aebcec26fed458ef223d96cedf2d50e12150953fea32b264c50853a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:07 compute-0 podman[250829]: 2025-11-25 20:27:07.309685968 +0000 UTC m=+0.178537600 container init b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:27:07 compute-0 podman[250829]: 2025-11-25 20:27:07.329443817 +0000 UTC m=+0.198295439 container start b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:27:07 compute-0 podman[250829]: 2025-11-25 20:27:07.334206374 +0000 UTC m=+0.203058026 container attach b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:27:07 compute-0 ceph-mon[75144]: pgmap v665: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:08 compute-0 competent_merkle[250846]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:27:08 compute-0 competent_merkle[250846]: --> relative data size: 1.0
Nov 25 20:27:08 compute-0 competent_merkle[250846]: --> All data devices are unavailable
Nov 25 20:27:08 compute-0 systemd[1]: libpod-b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd.scope: Deactivated successfully.
Nov 25 20:27:08 compute-0 systemd[1]: libpod-b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd.scope: Consumed 1.099s CPU time.
Nov 25 20:27:08 compute-0 podman[250829]: 2025-11-25 20:27:08.461861237 +0000 UTC m=+1.330712869 container died b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:27:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a019fd381aebcec26fed458ef223d96cedf2d50e12150953fea32b264c50853a-merged.mount: Deactivated successfully.
Nov 25 20:27:08 compute-0 podman[250829]: 2025-11-25 20:27:08.539808403 +0000 UTC m=+1.408659995 container remove b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:27:08 compute-0 systemd[1]: libpod-conmon-b0621b9ce9e8449c994d57d03b4855ac1d259ac45b9908bb2b911cd1316457dd.scope: Deactivated successfully.
Nov 25 20:27:08 compute-0 sudo[250723]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:08 compute-0 sudo[250889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:08 compute-0 sudo[250889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:08 compute-0 sudo[250889]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:08 compute-0 sudo[250914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:27:08 compute-0 sudo[250914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:08 compute-0 sudo[250914]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:08 compute-0 sudo[250939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:08 compute-0 sudo[250939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:08 compute-0 sudo[250939]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:08 compute-0 sudo[250964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:27:08 compute-0 sudo[250964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v666: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:09 compute-0 podman[251031]: 2025-11-25 20:27:09.426857275 +0000 UTC m=+0.058901637 container create badb135dc9b32c72860da6cb519f3f9721c9812feb2388a0e780b028bf46f325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_euclid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:27:09 compute-0 podman[251031]: 2025-11-25 20:27:09.398061315 +0000 UTC m=+0.030105717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:27:09 compute-0 systemd[1]: Started libpod-conmon-badb135dc9b32c72860da6cb519f3f9721c9812feb2388a0e780b028bf46f325.scope.
Nov 25 20:27:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:27:09 compute-0 podman[251031]: 2025-11-25 20:27:09.567600653 +0000 UTC m=+0.199645075 container init badb135dc9b32c72860da6cb519f3f9721c9812feb2388a0e780b028bf46f325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_euclid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 20:27:09 compute-0 podman[251031]: 2025-11-25 20:27:09.581022433 +0000 UTC m=+0.213066785 container start badb135dc9b32c72860da6cb519f3f9721c9812feb2388a0e780b028bf46f325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_euclid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:27:09 compute-0 podman[251031]: 2025-11-25 20:27:09.585277476 +0000 UTC m=+0.217321908 container attach badb135dc9b32c72860da6cb519f3f9721c9812feb2388a0e780b028bf46f325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:27:09 compute-0 nostalgic_euclid[251047]: 167 167
Nov 25 20:27:09 compute-0 systemd[1]: libpod-badb135dc9b32c72860da6cb519f3f9721c9812feb2388a0e780b028bf46f325.scope: Deactivated successfully.
Nov 25 20:27:09 compute-0 podman[251031]: 2025-11-25 20:27:09.588094751 +0000 UTC m=+0.220139103 container died badb135dc9b32c72860da6cb519f3f9721c9812feb2388a0e780b028bf46f325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 20:27:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-953d1ae3b85af7cb0ef4a9d698c804f172f8571a2db0ad0a19e3db7037dccee5-merged.mount: Deactivated successfully.
Nov 25 20:27:09 compute-0 podman[251031]: 2025-11-25 20:27:09.636675721 +0000 UTC m=+0.268720083 container remove badb135dc9b32c72860da6cb519f3f9721c9812feb2388a0e780b028bf46f325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_euclid, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:27:09 compute-0 systemd[1]: libpod-conmon-badb135dc9b32c72860da6cb519f3f9721c9812feb2388a0e780b028bf46f325.scope: Deactivated successfully.
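
The single line of output from the one-shot container nostalgic_euclid above, "167 167", is consistent with cephadm probing the image for the uid/gid of its built-in ceph user (167:167 in these Ceph container images) before chowning host paths; the actual argv of that container is not recorded in this log, so the following reconstruction is purely hypothetical:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Hypothetical: ask the image who owns /var/lib/ceph. The real one-shot
    # container's command line is not in this log; this only illustrates
    # where a "167 167" line could come from.
    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())  # expected: "167 167"
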
Nov 25 20:27:09 compute-0 podman[251071]: 2025-11-25 20:27:09.894624425 +0000 UTC m=+0.073734814 container create 86e3cbc1929ab4ec1ac9da78829f764aeaf1a10c83518f464ae2fd5436348e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:27:09 compute-0 systemd[1]: Started libpod-conmon-86e3cbc1929ab4ec1ac9da78829f764aeaf1a10c83518f464ae2fd5436348e60.scope.
Nov 25 20:27:09 compute-0 podman[251071]: 2025-11-25 20:27:09.864186242 +0000 UTC m=+0.043296711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:27:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a4a5458e77943e958c5fa332a47fbf1fd8e3798e5ffa278a15a5961bfb9a68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a4a5458e77943e958c5fa332a47fbf1fd8e3798e5ffa278a15a5961bfb9a68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a4a5458e77943e958c5fa332a47fbf1fd8e3798e5ffa278a15a5961bfb9a68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a4a5458e77943e958c5fa332a47fbf1fd8e3798e5ffa278a15a5961bfb9a68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:10 compute-0 podman[251071]: 2025-11-25 20:27:10.002616826 +0000 UTC m=+0.181727235 container init 86e3cbc1929ab4ec1ac9da78829f764aeaf1a10c83518f464ae2fd5436348e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:27:10 compute-0 podman[251071]: 2025-11-25 20:27:10.01397071 +0000 UTC m=+0.193081129 container start 86e3cbc1929ab4ec1ac9da78829f764aeaf1a10c83518f464ae2fd5436348e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:27:10 compute-0 podman[251071]: 2025-11-25 20:27:10.017618018 +0000 UTC m=+0.196728437 container attach 86e3cbc1929ab4ec1ac9da78829f764aeaf1a10c83518f464ae2fd5436348e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:27:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:10 compute-0 ceph-mon[75144]: pgmap v666: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:10 compute-0 trusting_euler[251087]: {
Nov 25 20:27:10 compute-0 trusting_euler[251087]:     "0": [
Nov 25 20:27:10 compute-0 trusting_euler[251087]:         {
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "devices": [
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "/dev/loop3"
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             ],
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_name": "ceph_lv0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_size": "21470642176",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "name": "ceph_lv0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "tags": {
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.cluster_name": "ceph",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.crush_device_class": "",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.encrypted": "0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.osd_id": "0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.type": "block",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.vdo": "0"
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             },
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "type": "block",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "vg_name": "ceph_vg0"
Nov 25 20:27:10 compute-0 trusting_euler[251087]:         }
Nov 25 20:27:10 compute-0 trusting_euler[251087]:     ],
Nov 25 20:27:10 compute-0 trusting_euler[251087]:     "1": [
Nov 25 20:27:10 compute-0 trusting_euler[251087]:         {
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "devices": [
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "/dev/loop4"
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             ],
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_name": "ceph_lv1",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_size": "21470642176",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "name": "ceph_lv1",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "tags": {
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.cluster_name": "ceph",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.crush_device_class": "",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.encrypted": "0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.osd_id": "1",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.type": "block",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.vdo": "0"
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             },
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "type": "block",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "vg_name": "ceph_vg1"
Nov 25 20:27:10 compute-0 trusting_euler[251087]:         }
Nov 25 20:27:10 compute-0 trusting_euler[251087]:     ],
Nov 25 20:27:10 compute-0 trusting_euler[251087]:     "2": [
Nov 25 20:27:10 compute-0 trusting_euler[251087]:         {
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "devices": [
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "/dev/loop5"
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             ],
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_name": "ceph_lv2",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_size": "21470642176",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "name": "ceph_lv2",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "tags": {
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.cluster_name": "ceph",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.crush_device_class": "",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.encrypted": "0",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.osd_id": "2",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.type": "block",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:                 "ceph.vdo": "0"
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             },
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "type": "block",
Nov 25 20:27:10 compute-0 trusting_euler[251087]:             "vg_name": "ceph_vg2"
Nov 25 20:27:10 compute-0 trusting_euler[251087]:         }
Nov 25 20:27:10 compute-0 trusting_euler[251087]:     ]
Nov 25 20:27:10 compute-0 trusting_euler[251087]: }
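
The JSON printed by trusting_euler above is the stdout of the "ceph-volume lvm list --format json" call launched via sudo at 20:27:08: top-level keys are OSD ids, each mapping to a list of logical-volume records whose ceph.* tags tie the LV to its OSD fsid and cluster fsid. Since all three LVs already carry ceph.osd_id tags, the earlier planner message "All data devices are unavailable" is expected — every LVM data device is already an OSD. A minimal sketch of consuming this output, assuming python3 and a cephadm binary on PATH (the log actually invokes the bundled copy under /var/lib/ceph with extra --image/--timeout flags; the fsid below is taken verbatim from the log):

    import json
    import subprocess

    FSID = "712dd110-763a-5547-8ef7-acda1414fdce"

    # Same listing the log shows cephadm requesting, minus the wrapper flags.
    out = subprocess.check_output(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "lvm", "list", "--format", "json"])
    lvs = json.loads(out)

    # Keys are OSD ids ("0", "1", "2"); values are lists of LV records.
    for osd_id, records in sorted(lvs.items()):
        for lv in records:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(osd_fsid={tags['ceph.osd_fsid']})")
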
Nov 25 20:27:10 compute-0 systemd[1]: libpod-86e3cbc1929ab4ec1ac9da78829f764aeaf1a10c83518f464ae2fd5436348e60.scope: Deactivated successfully.
Nov 25 20:27:10 compute-0 podman[251071]: 2025-11-25 20:27:10.769871843 +0000 UTC m=+0.948982252 container died 86e3cbc1929ab4ec1ac9da78829f764aeaf1a10c83518f464ae2fd5436348e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4a4a5458e77943e958c5fa332a47fbf1fd8e3798e5ffa278a15a5961bfb9a68-merged.mount: Deactivated successfully.
Nov 25 20:27:10 compute-0 podman[251071]: 2025-11-25 20:27:10.849602896 +0000 UTC m=+1.028713315 container remove 86e3cbc1929ab4ec1ac9da78829f764aeaf1a10c83518f464ae2fd5436348e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:27:10 compute-0 systemd[1]: libpod-conmon-86e3cbc1929ab4ec1ac9da78829f764aeaf1a10c83518f464ae2fd5436348e60.scope: Deactivated successfully.
Nov 25 20:27:10 compute-0 sudo[250964]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:10 compute-0 sudo[251110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:11 compute-0 sudo[251110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:11 compute-0 sudo[251110]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:11 compute-0 sudo[251135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:27:11 compute-0 sudo[251135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:11 compute-0 sudo[251135]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:11 compute-0 sudo[251160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:11 compute-0 sudo[251160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:11 compute-0 sudo[251160]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v667: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:11 compute-0 sudo[251185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:27:11 compute-0 sudo[251185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:11 compute-0 podman[251249]: 2025-11-25 20:27:11.706112052 +0000 UTC m=+0.074200297 container create 063efa37b0bfbd6417c94b0cee55d3504f75848e39c82db6504860fd0fa3782c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 20:27:11 compute-0 systemd[1]: Started libpod-conmon-063efa37b0bfbd6417c94b0cee55d3504f75848e39c82db6504860fd0fa3782c.scope.
Nov 25 20:27:11 compute-0 podman[251249]: 2025-11-25 20:27:11.677728142 +0000 UTC m=+0.045816437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:27:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:27:11 compute-0 podman[251249]: 2025-11-25 20:27:11.788528228 +0000 UTC m=+0.156616473 container init 063efa37b0bfbd6417c94b0cee55d3504f75848e39c82db6504860fd0fa3782c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cohen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:27:11 compute-0 podman[251249]: 2025-11-25 20:27:11.800009385 +0000 UTC m=+0.168097630 container start 063efa37b0bfbd6417c94b0cee55d3504f75848e39c82db6504860fd0fa3782c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:27:11 compute-0 awesome_cohen[251265]: 167 167
Nov 25 20:27:11 compute-0 podman[251249]: 2025-11-25 20:27:11.803378855 +0000 UTC m=+0.171467140 container attach 063efa37b0bfbd6417c94b0cee55d3504f75848e39c82db6504860fd0fa3782c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:27:11 compute-0 systemd[1]: libpod-063efa37b0bfbd6417c94b0cee55d3504f75848e39c82db6504860fd0fa3782c.scope: Deactivated successfully.
Nov 25 20:27:11 compute-0 podman[251249]: 2025-11-25 20:27:11.805268626 +0000 UTC m=+0.173356921 container died 063efa37b0bfbd6417c94b0cee55d3504f75848e39c82db6504860fd0fa3782c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-974d41a1e0b333359f1da5d12f54687a09a995f3cb25cea38fa67da939d2c75b-merged.mount: Deactivated successfully.
Nov 25 20:27:11 compute-0 podman[251249]: 2025-11-25 20:27:11.846666204 +0000 UTC m=+0.214754479 container remove 063efa37b0bfbd6417c94b0cee55d3504f75848e39c82db6504860fd0fa3782c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cohen, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:27:11 compute-0 systemd[1]: libpod-conmon-063efa37b0bfbd6417c94b0cee55d3504f75848e39c82db6504860fd0fa3782c.scope: Deactivated successfully.
Nov 25 20:27:12 compute-0 podman[251289]: 2025-11-25 20:27:12.09422472 +0000 UTC m=+0.068923376 container create 5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hertz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:27:12 compute-0 systemd[1]: Started libpod-conmon-5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a.scope.
Nov 25 20:27:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b53e69c6c1dafb7302f68eb1eef838cb7694bb3ff82857f6fbcb3dee11dad7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b53e69c6c1dafb7302f68eb1eef838cb7694bb3ff82857f6fbcb3dee11dad7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:12 compute-0 podman[251289]: 2025-11-25 20:27:12.067308739 +0000 UTC m=+0.042007445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b53e69c6c1dafb7302f68eb1eef838cb7694bb3ff82857f6fbcb3dee11dad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b53e69c6c1dafb7302f68eb1eef838cb7694bb3ff82857f6fbcb3dee11dad7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:27:12 compute-0 podman[251289]: 2025-11-25 20:27:12.16935477 +0000 UTC m=+0.144053416 container init 5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:27:12 compute-0 podman[251289]: 2025-11-25 20:27:12.182968265 +0000 UTC m=+0.157666881 container start 5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:27:12 compute-0 podman[251289]: 2025-11-25 20:27:12.186306725 +0000 UTC m=+0.161005341 container attach 5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:27:12 compute-0 podman[251303]: 2025-11-25 20:27:12.232363427 +0000 UTC m=+0.093691588 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 20:27:12 compute-0 ceph-mon[75144]: pgmap v667: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:13 compute-0 blissful_hertz[251307]: {
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "osd_id": 2,
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "type": "bluestore"
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:     },
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "osd_id": 1,
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "type": "bluestore"
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:     },
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "osd_id": 0,
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:         "type": "bluestore"
Nov 25 20:27:13 compute-0 blissful_hertz[251307]:     }
Nov 25 20:27:13 compute-0 blissful_hertz[251307]: }
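
For comparison, the "raw list" output from blissful_hertz above is keyed by osd_uuid rather than OSD id and reports the device-mapper path of each bluestore block device. A small consistency check between the two listings, assuming the two JSON bodies were saved to lvm.json and raw.json (both filenames are illustrative, not from the log):

    import json

    lvm = json.load(open("lvm.json"))   # body of the earlier "lvm list" output
    raw = json.load(open("raw.json"))   # body of the "raw list" output above

    for osd_uuid, rec in raw.items():
        tags = lvm[str(rec["osd_id"])][0]["tags"]
        # raw's osd_uuid must match lvm's ceph.osd_fsid, and both listings
        # must agree on the cluster fsid (712dd110-... throughout this log).
        assert tags["ceph.osd_fsid"] == osd_uuid
        assert tags["ceph.cluster_fsid"] == rec["ceph_fsid"]
        print(f"osd.{rec['osd_id']}: {rec['device']} ({rec['type']}) consistent")
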
Nov 25 20:27:13 compute-0 systemd[1]: libpod-5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a.scope: Deactivated successfully.
Nov 25 20:27:13 compute-0 podman[251289]: 2025-11-25 20:27:13.218674347 +0000 UTC m=+1.193373003 container died 5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hertz, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:27:13 compute-0 systemd[1]: libpod-5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a.scope: Consumed 1.046s CPU time.
Nov 25 20:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b53e69c6c1dafb7302f68eb1eef838cb7694bb3ff82857f6fbcb3dee11dad7-merged.mount: Deactivated successfully.
Nov 25 20:27:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v668: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:13 compute-0 podman[251289]: 2025-11-25 20:27:13.284586321 +0000 UTC m=+1.259284967 container remove 5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hertz, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:27:13 compute-0 systemd[1]: libpod-conmon-5465a532d803d7e19642760e628b602ffc5542fb6a801e9e2ba9aedb760bc18a.scope: Deactivated successfully.
Nov 25 20:27:13 compute-0 sudo[251185]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:27:13 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:27:13 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:13 compute-0 sudo[251377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:27:13 compute-0 sudo[251377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:13 compute-0 sudo[251377]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:13 compute-0 sudo[251402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:27:13 compute-0 sudo[251402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:27:13 compute-0 sudo[251402]: pam_unix(sudo:session): session closed for user root
Nov 25 20:27:14 compute-0 ceph-mon[75144]: pgmap v668: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:14 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:14 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:27:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v669: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:16 compute-0 ceph-mon[75144]: pgmap v669: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:27:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3413269129' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:27:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:27:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3413269129' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:27:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v670: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3413269129' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:27:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3413269129' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
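
The two audited mon_commands above are the OpenStack client (entity client.openstack) polling cluster capacity and the quota on the volumes pool — the JSON equivalents of "ceph df -f json" and "ceph osd pool get-quota volumes -f json". A sketch issuing the same payloads through librados, assuming python3-rados is installed and the keyring is readable (the conffile path and use of the client.openstack name are assumptions; the command payloads are copied verbatim from the audit lines):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command takes the JSON payload and an (empty) input buffer,
            # and returns (retcode, output buffer, error string).
            ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, errs or outbuf[:120])
    finally:
        cluster.shutdown()
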
Nov 25 20:27:18 compute-0 ceph-mon[75144]: pgmap v670: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v671: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:20 compute-0 ceph-mon[75144]: pgmap v671: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v672: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:21 compute-0 ceph-mon[75144]: pgmap v672: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v673: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:24 compute-0 ceph-mon[75144]: pgmap v673: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v674: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:26 compute-0 ceph-mon[75144]: pgmap v674: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:27:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:27:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:27:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:27:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:27:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:27:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v675: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:28 compute-0 ceph-mon[75144]: pgmap v675: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v676: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:29 compute-0 ceph-mon[75144]: pgmap v676: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v677: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:31 compute-0 podman[251427]: 2025-11-25 20:27:31.402644879 +0000 UTC m=+0.092767144 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 20:27:32 compute-0 ceph-mon[75144]: pgmap v677: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:32 compute-0 nova_compute[248866]: 2025-11-25 20:27:32.550 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:32 compute-0 nova_compute[248866]: 2025-11-25 20:27:32.551 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:32 compute-0 nova_compute[248866]: 2025-11-25 20:27:32.573 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:32 compute-0 nova_compute[248866]: 2025-11-25 20:27:32.574 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:32 compute-0 nova_compute[248866]: 2025-11-25 20:27:32.574 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:32 compute-0 nova_compute[248866]: 2025-11-25 20:27:32.574 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:32 compute-0 nova_compute[248866]: 2025-11-25 20:27:32.574 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.061 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.061 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.062 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.062 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.094 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.094 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.094 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.095 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.095 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:27:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v678: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:27:33 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082393620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.635 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.853 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.855 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5313MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.856 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:27:33 compute-0 nova_compute[248866]: 2025-11-25 20:27:33.857 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:27:34 compute-0 nova_compute[248866]: 2025-11-25 20:27:34.018 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:27:34 compute-0 nova_compute[248866]: 2025-11-25 20:27:34.019 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:27:34 compute-0 nova_compute[248866]: 2025-11-25 20:27:34.034 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:27:34 compute-0 ceph-mon[75144]: pgmap v678: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:34 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3082393620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:27:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:27:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1938124668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:27:34 compute-0 nova_compute[248866]: 2025-11-25 20:27:34.481 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:27:34 compute-0 nova_compute[248866]: 2025-11-25 20:27:34.491 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:27:34 compute-0 nova_compute[248866]: 2025-11-25 20:27:34.515 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:27:34 compute-0 nova_compute[248866]: 2025-11-25 20:27:34.518 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:27:34 compute-0 nova_compute[248866]: 2025-11-25 20:27:34.518 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:27:34 compute-0 podman[251490]: 2025-11-25 20:27:34.992416833 +0000 UTC m=+0.086731063 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 20:27:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v679: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:35 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1938124668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:27:36 compute-0 ceph-mon[75144]: pgmap v679: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v680: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:38 compute-0 ceph-mon[75144]: pgmap v680: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v681: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:40 compute-0 ceph-mon[75144]: pgmap v681: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v682: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:42 compute-0 ceph-mon[75144]: pgmap v682: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:43 compute-0 podman[251510]: 2025-11-25 20:27:43.048638594 +0000 UTC m=+0.136403251 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 25 20:27:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v683: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:43 compute-0 ceph-mon[75144]: pgmap v683: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v684: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:46 compute-0 ceph-mon[75144]: pgmap v684: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v685: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:48 compute-0 ceph-mon[75144]: pgmap v685: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:27:48.943 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:27:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:27:48.944 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:27:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:27:48.944 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:27:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v686: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:49 compute-0 ceph-mon[75144]: pgmap v686: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v687: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:52 compute-0 ceph-mon[75144]: pgmap v687: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v688: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:54 compute-0 ceph-mon[75144]: pgmap v688: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v689: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:27:56 compute-0 ceph-mon[75144]: pgmap v689: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:27:56
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'cephfs.cephfs.data', 'backups', 'volumes']
Nov 25 20:27:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:27:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v690: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:58 compute-0 ceph-mon[75144]: pgmap v690: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v691: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:27:59 compute-0 ceph-mon[75144]: pgmap v691: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v692: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:02 compute-0 podman[251536]: 2025-11-25 20:28:02.005786741 +0000 UTC m=+0.102382341 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:28:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:28:02 compute-0 ceph-mon[75144]: pgmap v692: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v693: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:03 compute-0 ceph-mon[75144]: pgmap v693: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v694: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:05 compute-0 ceph-mon[75144]: pgmap v694: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:05 compute-0 podman[251555]: 2025-11-25 20:28:05.990533297 +0000 UTC m=+0.081994465 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 20:28:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v695: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:08 compute-0 ceph-mon[75144]: pgmap v695: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v696: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:10 compute-0 ceph-mon[75144]: pgmap v696: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v697: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:11 compute-0 ceph-mon[75144]: pgmap v697: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v698: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:13 compute-0 sudo[251575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:28:13 compute-0 sudo[251575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:13 compute-0 sudo[251575]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:13 compute-0 sudo[251606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:28:13 compute-0 sudo[251606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:13 compute-0 sudo[251606]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:13 compute-0 podman[251599]: 2025-11-25 20:28:13.766402586 +0000 UTC m=+0.113676424 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 20:28:13 compute-0 sudo[251646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:28:13 compute-0 sudo[251646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:13 compute-0 sudo[251646]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:13 compute-0 sudo[251677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:28:13 compute-0 sudo[251677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:14 compute-0 ceph-mon[75144]: pgmap v698: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:14 compute-0 sudo[251677]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 25 20:28:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 25 20:28:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 25 20:28:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 20:28:14 compute-0 ceph-mgr[75443]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:28:14 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 25 20:28:14 compute-0 ceph-mgr[75443]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:28:14 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:28:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:28:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:28:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:28:14 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 40c8b963-8d1d-47ef-ac89-ea3d537d22e4 does not exist
Nov 25 20:28:14 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d59759c0-c696-4155-8f54-e08b7a89e066 does not exist
Nov 25 20:28:14 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 8c0afb3d-5c4a-4ba9-876e-3dc2cd3b1c42 does not exist
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:28:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:28:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:28:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:28:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:28:14 compute-0 sudo[251734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:28:14 compute-0 sudo[251734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:14 compute-0 sudo[251734]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:14 compute-0 sudo[251759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:28:14 compute-0 sudo[251759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:14 compute-0 sudo[251759]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:14 compute-0 sudo[251784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:28:14 compute-0 sudo[251784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:14 compute-0 sudo[251784]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:15 compute-0 sudo[251809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:28:15 compute-0 sudo[251809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v699: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:28:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:28:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 20:28:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:28:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:28:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:28:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:28:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:28:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:28:15 compute-0 podman[251873]: 2025-11-25 20:28:15.433056395 +0000 UTC m=+0.042658293 container create c7e7dc026e7ff1636a7f364078f9bffeecf1accafc687e315d6ffdfb53793c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 20:28:15 compute-0 systemd[1]: Started libpod-conmon-c7e7dc026e7ff1636a7f364078f9bffeecf1accafc687e315d6ffdfb53793c5a.scope.
Nov 25 20:28:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:28:15 compute-0 podman[251873]: 2025-11-25 20:28:15.414689984 +0000 UTC m=+0.024291902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:28:15 compute-0 podman[251873]: 2025-11-25 20:28:15.523422394 +0000 UTC m=+0.133024372 container init c7e7dc026e7ff1636a7f364078f9bffeecf1accafc687e315d6ffdfb53793c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:28:15 compute-0 podman[251873]: 2025-11-25 20:28:15.533910544 +0000 UTC m=+0.143512442 container start c7e7dc026e7ff1636a7f364078f9bffeecf1accafc687e315d6ffdfb53793c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:28:15 compute-0 podman[251873]: 2025-11-25 20:28:15.537293956 +0000 UTC m=+0.146895964 container attach c7e7dc026e7ff1636a7f364078f9bffeecf1accafc687e315d6ffdfb53793c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 20:28:15 compute-0 crazy_kepler[251889]: 167 167
Nov 25 20:28:15 compute-0 systemd[1]: libpod-c7e7dc026e7ff1636a7f364078f9bffeecf1accafc687e315d6ffdfb53793c5a.scope: Deactivated successfully.
Nov 25 20:28:15 compute-0 podman[251873]: 2025-11-25 20:28:15.540674136 +0000 UTC m=+0.150276104 container died c7e7dc026e7ff1636a7f364078f9bffeecf1accafc687e315d6ffdfb53793c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:28:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff37963693e8cb46c0a89d0e1e0922671c81ddb30b4d13adc3e80ae0257aaac-merged.mount: Deactivated successfully.
Nov 25 20:28:15 compute-0 podman[251873]: 2025-11-25 20:28:15.593576652 +0000 UTC m=+0.203178580 container remove c7e7dc026e7ff1636a7f364078f9bffeecf1accafc687e315d6ffdfb53793c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:28:15 compute-0 systemd[1]: libpod-conmon-c7e7dc026e7ff1636a7f364078f9bffeecf1accafc687e315d6ffdfb53793c5a.scope: Deactivated successfully.
Nov 25 20:28:15 compute-0 podman[251912]: 2025-11-25 20:28:15.794659954 +0000 UTC m=+0.044985404 container create 96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:28:15 compute-0 systemd[1]: Started libpod-conmon-96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715.scope.
Nov 25 20:28:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226ab538d776badbbae80dd0ba96af23f0eeffbc608d4bb82fe80419cbfb306d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:15 compute-0 podman[251912]: 2025-11-25 20:28:15.777372881 +0000 UTC m=+0.027698381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226ab538d776badbbae80dd0ba96af23f0eeffbc608d4bb82fe80419cbfb306d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226ab538d776badbbae80dd0ba96af23f0eeffbc608d4bb82fe80419cbfb306d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226ab538d776badbbae80dd0ba96af23f0eeffbc608d4bb82fe80419cbfb306d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226ab538d776badbbae80dd0ba96af23f0eeffbc608d4bb82fe80419cbfb306d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:15 compute-0 podman[251912]: 2025-11-25 20:28:15.891073585 +0000 UTC m=+0.141399075 container init 96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chatterjee, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:28:15 compute-0 podman[251912]: 2025-11-25 20:28:15.903325022 +0000 UTC m=+0.153650522 container start 96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:28:15 compute-0 podman[251912]: 2025-11-25 20:28:15.907901895 +0000 UTC m=+0.158227355 container attach 96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:28:16 compute-0 ceph-mon[75144]: Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:28:16 compute-0 ceph-mon[75144]: Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:28:16 compute-0 ceph-mon[75144]: pgmap v699: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:28:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664624776' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:28:17 compute-0 peaceful_chatterjee[251929]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:28:17 compute-0 peaceful_chatterjee[251929]: --> relative data size: 1.0
Nov 25 20:28:17 compute-0 peaceful_chatterjee[251929]: --> All data devices are unavailable
Nov 25 20:28:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:28:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664624776' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:28:17 compute-0 systemd[1]: libpod-96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715.scope: Deactivated successfully.
Nov 25 20:28:17 compute-0 podman[251912]: 2025-11-25 20:28:17.061597935 +0000 UTC m=+1.311923425 container died 96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 20:28:17 compute-0 systemd[1]: libpod-96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715.scope: Consumed 1.108s CPU time.
Nov 25 20:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-226ab538d776badbbae80dd0ba96af23f0eeffbc608d4bb82fe80419cbfb306d-merged.mount: Deactivated successfully.
Nov 25 20:28:17 compute-0 podman[251912]: 2025-11-25 20:28:17.242829916 +0000 UTC m=+1.493155366 container remove 96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 20:28:17 compute-0 systemd[1]: libpod-conmon-96fb8fc08af93e06f9330a920b305ecf36ddc56b015136a0858f85ac723e7715.scope: Deactivated successfully.
Nov 25 20:28:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v700: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:17 compute-0 sudo[251809]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:17 compute-0 sudo[251972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:28:17 compute-0 sudo[251972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:17 compute-0 sudo[251972]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:17 compute-0 sudo[251997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:28:17 compute-0 sudo[251997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:17 compute-0 sudo[251997]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:17 compute-0 sudo[252022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:28:17 compute-0 sudo[252022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:17 compute-0 sudo[252022]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:17 compute-0 sudo[252047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:28:17 compute-0 sudo[252047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2664624776' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:28:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2664624776' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:28:18 compute-0 podman[252112]: 2025-11-25 20:28:18.118192935 +0000 UTC m=+0.072102561 container create dc5a889e6fbd0afa10de83866b2aa17eabae673c54ba5afa1a4b23cc102cc077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goodall, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 20:28:18 compute-0 systemd[1]: Started libpod-conmon-dc5a889e6fbd0afa10de83866b2aa17eabae673c54ba5afa1a4b23cc102cc077.scope.
Nov 25 20:28:18 compute-0 podman[252112]: 2025-11-25 20:28:18.087418732 +0000 UTC m=+0.041328408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:28:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:28:18 compute-0 podman[252112]: 2025-11-25 20:28:18.220116223 +0000 UTC m=+0.174025849 container init dc5a889e6fbd0afa10de83866b2aa17eabae673c54ba5afa1a4b23cc102cc077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 20:28:18 compute-0 podman[252112]: 2025-11-25 20:28:18.231352684 +0000 UTC m=+0.185262320 container start dc5a889e6fbd0afa10de83866b2aa17eabae673c54ba5afa1a4b23cc102cc077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goodall, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:28:18 compute-0 jovial_goodall[252128]: 167 167
Nov 25 20:28:18 compute-0 systemd[1]: libpod-dc5a889e6fbd0afa10de83866b2aa17eabae673c54ba5afa1a4b23cc102cc077.scope: Deactivated successfully.
Nov 25 20:28:18 compute-0 podman[252112]: 2025-11-25 20:28:18.239696407 +0000 UTC m=+0.193606033 container attach dc5a889e6fbd0afa10de83866b2aa17eabae673c54ba5afa1a4b23cc102cc077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:28:18 compute-0 podman[252112]: 2025-11-25 20:28:18.241244649 +0000 UTC m=+0.195154285 container died dc5a889e6fbd0afa10de83866b2aa17eabae673c54ba5afa1a4b23cc102cc077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ee83f4eb3b4de659ee403da174498175cfa12f207fc163d6764ab409ab674a6-merged.mount: Deactivated successfully.
Nov 25 20:28:18 compute-0 podman[252112]: 2025-11-25 20:28:18.295692976 +0000 UTC m=+0.249602612 container remove dc5a889e6fbd0afa10de83866b2aa17eabae673c54ba5afa1a4b23cc102cc077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goodall, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:28:18 compute-0 systemd[1]: libpod-conmon-dc5a889e6fbd0afa10de83866b2aa17eabae673c54ba5afa1a4b23cc102cc077.scope: Deactivated successfully.
Nov 25 20:28:18 compute-0 podman[252153]: 2025-11-25 20:28:18.501510955 +0000 UTC m=+0.039551889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:28:18 compute-0 podman[252153]: 2025-11-25 20:28:18.596104528 +0000 UTC m=+0.134145412 container create e5c3db929f223a82d4144bd37ba00f5548abbe28b28c801678f714d8f155d9b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_gould, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:28:18 compute-0 ceph-mon[75144]: pgmap v700: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:18 compute-0 systemd[1]: Started libpod-conmon-e5c3db929f223a82d4144bd37ba00f5548abbe28b28c801678f714d8f155d9b8.scope.
Nov 25 20:28:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5c702577c08c995b749efb6aee22b3ae905cbd85304333cdc225866f766d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5c702577c08c995b749efb6aee22b3ae905cbd85304333cdc225866f766d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5c702577c08c995b749efb6aee22b3ae905cbd85304333cdc225866f766d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5c702577c08c995b749efb6aee22b3ae905cbd85304333cdc225866f766d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:18 compute-0 podman[252153]: 2025-11-25 20:28:18.720212879 +0000 UTC m=+0.258253733 container init e5c3db929f223a82d4144bd37ba00f5548abbe28b28c801678f714d8f155d9b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 20:28:18 compute-0 podman[252153]: 2025-11-25 20:28:18.733968077 +0000 UTC m=+0.272008951 container start e5c3db929f223a82d4144bd37ba00f5548abbe28b28c801678f714d8f155d9b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_gould, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:28:18 compute-0 podman[252153]: 2025-11-25 20:28:18.737254445 +0000 UTC m=+0.275295309 container attach e5c3db929f223a82d4144bd37ba00f5548abbe28b28c801678f714d8f155d9b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_gould, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:28:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v701: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:19 compute-0 lucid_gould[252170]: {
Nov 25 20:28:19 compute-0 lucid_gould[252170]:     "0": [
Nov 25 20:28:19 compute-0 lucid_gould[252170]:         {
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "devices": [
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "/dev/loop3"
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             ],
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_name": "ceph_lv0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_size": "21470642176",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "name": "ceph_lv0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "tags": {
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.cluster_name": "ceph",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.crush_device_class": "",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.encrypted": "0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.osd_id": "0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.type": "block",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.vdo": "0"
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             },
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "type": "block",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "vg_name": "ceph_vg0"
Nov 25 20:28:19 compute-0 lucid_gould[252170]:         }
Nov 25 20:28:19 compute-0 lucid_gould[252170]:     ],
Nov 25 20:28:19 compute-0 lucid_gould[252170]:     "1": [
Nov 25 20:28:19 compute-0 lucid_gould[252170]:         {
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "devices": [
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "/dev/loop4"
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             ],
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_name": "ceph_lv1",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_size": "21470642176",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "name": "ceph_lv1",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "tags": {
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.cluster_name": "ceph",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.crush_device_class": "",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.encrypted": "0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.osd_id": "1",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.type": "block",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.vdo": "0"
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             },
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "type": "block",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "vg_name": "ceph_vg1"
Nov 25 20:28:19 compute-0 lucid_gould[252170]:         }
Nov 25 20:28:19 compute-0 lucid_gould[252170]:     ],
Nov 25 20:28:19 compute-0 lucid_gould[252170]:     "2": [
Nov 25 20:28:19 compute-0 lucid_gould[252170]:         {
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "devices": [
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "/dev/loop5"
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             ],
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_name": "ceph_lv2",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_size": "21470642176",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "name": "ceph_lv2",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "tags": {
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.cluster_name": "ceph",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.crush_device_class": "",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.encrypted": "0",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.osd_id": "2",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.type": "block",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:                 "ceph.vdo": "0"
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             },
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "type": "block",
Nov 25 20:28:19 compute-0 lucid_gould[252170]:             "vg_name": "ceph_vg2"
Nov 25 20:28:19 compute-0 lucid_gould[252170]:         }
Nov 25 20:28:19 compute-0 lucid_gould[252170]:     ]
Nov 25 20:28:19 compute-0 lucid_gould[252170]: }
Nov 25 20:28:19 compute-0 systemd[1]: libpod-e5c3db929f223a82d4144bd37ba00f5548abbe28b28c801678f714d8f155d9b8.scope: Deactivated successfully.
Nov 25 20:28:19 compute-0 podman[252153]: 2025-11-25 20:28:19.477167759 +0000 UTC m=+1.015208623 container died e5c3db929f223a82d4144bd37ba00f5548abbe28b28c801678f714d8f155d9b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-af5c702577c08c995b749efb6aee22b3ae905cbd85304333cdc225866f766d81-merged.mount: Deactivated successfully.
Nov 25 20:28:19 compute-0 ceph-mon[75144]: pgmap v701: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:19 compute-0 podman[252153]: 2025-11-25 20:28:19.898694832 +0000 UTC m=+1.436735706 container remove e5c3db929f223a82d4144bd37ba00f5548abbe28b28c801678f714d8f155d9b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:28:19 compute-0 systemd[1]: libpod-conmon-e5c3db929f223a82d4144bd37ba00f5548abbe28b28c801678f714d8f155d9b8.scope: Deactivated successfully.
Nov 25 20:28:19 compute-0 sudo[252047]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:20 compute-0 sudo[252191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:28:20 compute-0 sudo[252191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:20 compute-0 sudo[252191]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:20 compute-0 sudo[252216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:28:20 compute-0 sudo[252216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:20 compute-0 sudo[252216]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:20 compute-0 sudo[252241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:28:20 compute-0 sudo[252241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:20 compute-0 sudo[252241]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:20 compute-0 sudo[252266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:28:20 compute-0 sudo[252266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:20 compute-0 podman[252330]: 2025-11-25 20:28:20.677395145 +0000 UTC m=+0.058291172 container create 7b52e66e150e859742783a477ce25a6b1a5a8f321b10cd2bdaedf8e5614d28fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 20:28:20 compute-0 systemd[1]: Started libpod-conmon-7b52e66e150e859742783a477ce25a6b1a5a8f321b10cd2bdaedf8e5614d28fb.scope.
Nov 25 20:28:20 compute-0 podman[252330]: 2025-11-25 20:28:20.649466967 +0000 UTC m=+0.030363034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:28:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:28:20 compute-0 podman[252330]: 2025-11-25 20:28:20.78895605 +0000 UTC m=+0.169852117 container init 7b52e66e150e859742783a477ce25a6b1a5a8f321b10cd2bdaedf8e5614d28fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:28:20 compute-0 podman[252330]: 2025-11-25 20:28:20.800381326 +0000 UTC m=+0.181277313 container start 7b52e66e150e859742783a477ce25a6b1a5a8f321b10cd2bdaedf8e5614d28fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:28:20 compute-0 podman[252330]: 2025-11-25 20:28:20.806036207 +0000 UTC m=+0.186932274 container attach 7b52e66e150e859742783a477ce25a6b1a5a8f321b10cd2bdaedf8e5614d28fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:28:20 compute-0 blissful_khayyam[252346]: 167 167
Nov 25 20:28:20 compute-0 systemd[1]: libpod-7b52e66e150e859742783a477ce25a6b1a5a8f321b10cd2bdaedf8e5614d28fb.scope: Deactivated successfully.
Nov 25 20:28:20 compute-0 podman[252330]: 2025-11-25 20:28:20.807763394 +0000 UTC m=+0.188659421 container died 7b52e66e150e859742783a477ce25a6b1a5a8f321b10cd2bdaedf8e5614d28fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-781b6bfc13d432c8be3eabc47c6be38655c3c6e615890b0f2941b35c219f998d-merged.mount: Deactivated successfully.
Nov 25 20:28:20 compute-0 podman[252330]: 2025-11-25 20:28:20.947094033 +0000 UTC m=+0.327990060 container remove 7b52e66e150e859742783a477ce25a6b1a5a8f321b10cd2bdaedf8e5614d28fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:28:20 compute-0 systemd[1]: libpod-conmon-7b52e66e150e859742783a477ce25a6b1a5a8f321b10cd2bdaedf8e5614d28fb.scope: Deactivated successfully.
Nov 25 20:28:21 compute-0 podman[252370]: 2025-11-25 20:28:21.219530975 +0000 UTC m=+0.083404694 container create 17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:28:21 compute-0 podman[252370]: 2025-11-25 20:28:21.1860974 +0000 UTC m=+0.049971199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:28:21 compute-0 systemd[1]: Started libpod-conmon-17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260.scope.
Nov 25 20:28:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v702: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78894b4cb866aa7685e854d33c32c51ef004c13c5aa56c1eb47891aae26867f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78894b4cb866aa7685e854d33c32c51ef004c13c5aa56c1eb47891aae26867f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78894b4cb866aa7685e854d33c32c51ef004c13c5aa56c1eb47891aae26867f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78894b4cb866aa7685e854d33c32c51ef004c13c5aa56c1eb47891aae26867f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:28:21 compute-0 podman[252370]: 2025-11-25 20:28:21.403604292 +0000 UTC m=+0.267478051 container init 17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:28:21 compute-0 podman[252370]: 2025-11-25 20:28:21.415904481 +0000 UTC m=+0.279778240 container start 17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:28:21 compute-0 podman[252370]: 2025-11-25 20:28:21.428245552 +0000 UTC m=+0.292119351 container attach 17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 20:28:22 compute-0 ceph-mon[75144]: pgmap v702: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:22 compute-0 modest_noyce[252387]: {
Nov 25 20:28:22 compute-0 modest_noyce[252387]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "osd_id": 2,
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "type": "bluestore"
Nov 25 20:28:22 compute-0 modest_noyce[252387]:     },
Nov 25 20:28:22 compute-0 modest_noyce[252387]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "osd_id": 1,
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "type": "bluestore"
Nov 25 20:28:22 compute-0 modest_noyce[252387]:     },
Nov 25 20:28:22 compute-0 modest_noyce[252387]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "osd_id": 0,
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:28:22 compute-0 modest_noyce[252387]:         "type": "bluestore"
Nov 25 20:28:22 compute-0 modest_noyce[252387]:     }
Nov 25 20:28:22 compute-0 modest_noyce[252387]: }
Nov 25 20:28:22 compute-0 systemd[1]: libpod-17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260.scope: Deactivated successfully.
Nov 25 20:28:22 compute-0 systemd[1]: libpod-17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260.scope: Consumed 1.176s CPU time.
Nov 25 20:28:22 compute-0 podman[252420]: 2025-11-25 20:28:22.65701654 +0000 UTC m=+0.044877052 container died 17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c78894b4cb866aa7685e854d33c32c51ef004c13c5aa56c1eb47891aae26867f-merged.mount: Deactivated successfully.
Nov 25 20:28:22 compute-0 podman[252420]: 2025-11-25 20:28:22.810781446 +0000 UTC m=+0.198641918 container remove 17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:28:22 compute-0 systemd[1]: libpod-conmon-17b3431ff1e40b346bdd2749be0b87c09190477e0fcb8ed7964f72659fa18260.scope: Deactivated successfully.
Nov 25 20:28:22 compute-0 sudo[252266]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:28:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:28:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:28:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:28:23 compute-0 sudo[252435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:28:23 compute-0 sudo[252435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:23 compute-0 sudo[252435]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:23 compute-0 sudo[252460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:28:23 compute-0 sudo[252460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:28:23 compute-0 sudo[252460]: pam_unix(sudo:session): session closed for user root
Nov 25 20:28:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v703: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:28:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:28:23 compute-0 ceph-mon[75144]: pgmap v703: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v704: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:26 compute-0 ceph-mon[75144]: pgmap v704: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:28:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:28:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:28:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:28:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:28:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:28:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v705: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:28 compute-0 ceph-mon[75144]: pgmap v705: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v706: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:29 compute-0 ceph-mon[75144]: pgmap v706: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v707: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:32 compute-0 ceph-mon[75144]: pgmap v707: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:33 compute-0 podman[252485]: 2025-11-25 20:28:33.033040924 +0000 UTC m=+0.115792080 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 20:28:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v708: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.499 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.500 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.500 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.500 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.535 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.537 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.537 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.537 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.538 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.538 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.538 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.588 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.589 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.589 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.589 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:28:33 compute-0 nova_compute[248866]: 2025-11-25 20:28:33.590 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:28:33 compute-0 ceph-mon[75144]: pgmap v708: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:28:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212068789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.043 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.253 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.256 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5304MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.256 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.256 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.330 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.330 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.347 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:28:34 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4212068789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:28:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:28:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1709065484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.812 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.821 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.839 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.842 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:28:34 compute-0 nova_compute[248866]: 2025-11-25 20:28:34.843 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:28:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v709: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:35 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1709065484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:28:35 compute-0 ceph-mon[75144]: pgmap v709: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:36 compute-0 nova_compute[248866]: 2025-11-25 20:28:36.350 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:28:36 compute-0 nova_compute[248866]: 2025-11-25 20:28:36.350 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:28:37 compute-0 podman[252549]: 2025-11-25 20:28:37.003128037 +0000 UTC m=+0.088936761 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 20:28:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v710: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:38 compute-0 ceph-mon[75144]: pgmap v710: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v711: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:40 compute-0 ceph-mon[75144]: pgmap v711: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v712: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:42 compute-0 ceph-mon[75144]: pgmap v712: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v713: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:44 compute-0 podman[252570]: 2025-11-25 20:28:44.058248104 +0000 UTC m=+0.143317917 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 20:28:44 compute-0 ceph-mon[75144]: pgmap v713: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v714: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:46 compute-0 ceph-mon[75144]: pgmap v714: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v715: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:48 compute-0 ceph-mon[75144]: pgmap v715: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:28:48.945 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:28:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:28:48.945 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:28:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:28:48.946 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:28:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v716: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:50 compute-0 ceph-mon[75144]: pgmap v716: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v717: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:52 compute-0 ceph-mon[75144]: pgmap v717: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v718: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:53 compute-0 ceph-mon[75144]: pgmap v718: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:28:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v719: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:56 compute-0 ceph-mon[75144]: pgmap v719: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:28:56
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Nov 25 20:28:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:28:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v720: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:58 compute-0 ceph-mon[75144]: pgmap v720: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:28:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v721: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:00 compute-0 ceph-mon[75144]: pgmap v721: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v722: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:29:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:29:02 compute-0 ceph-mon[75144]: pgmap v722: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v723: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:04 compute-0 podman[252596]: 2025-11-25 20:29:04.001680669 +0000 UTC m=+0.089293512 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 20:29:04 compute-0 ceph-mon[75144]: pgmap v723: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v724: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:06 compute-0 ceph-mon[75144]: pgmap v724: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v725: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:08 compute-0 podman[252615]: 2025-11-25 20:29:08.001117867 +0000 UTC m=+0.090562235 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 20:29:08 compute-0 ceph-mon[75144]: pgmap v725: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v726: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:10 compute-0 ceph-mon[75144]: pgmap v726: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v727: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:12 compute-0 ceph-mon[75144]: pgmap v727: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v728: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:13 compute-0 ceph-mon[75144]: pgmap v728: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:15 compute-0 podman[252636]: 2025-11-25 20:29:15.05218635 +0000 UTC m=+0.138930522 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 20:29:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v729: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:16 compute-0 ceph-mon[75144]: pgmap v729: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:29:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/60300105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:29:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:29:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/60300105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:29:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v730: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/60300105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:29:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/60300105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:29:18 compute-0 ceph-mon[75144]: pgmap v730: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v731: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:20 compute-0 ceph-mon[75144]: pgmap v731: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v732: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:22 compute-0 ceph-mon[75144]: pgmap v732: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:23 compute-0 sudo[252663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:29:23 compute-0 sudo[252663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:23 compute-0 sudo[252663]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v733: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:23 compute-0 sudo[252688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:29:23 compute-0 sudo[252688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:23 compute-0 sudo[252688]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:23 compute-0 sudo[252713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:29:23 compute-0 sudo[252713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:23 compute-0 sudo[252713]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:23 compute-0 sudo[252738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:29:23 compute-0 sudo[252738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:24 compute-0 sudo[252738]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:29:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:29:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:29:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:29:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:29:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:29:24 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 9a74e53a-258e-4d75-93f7-0332d6908613 does not exist
Nov 25 20:29:24 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 985df011-b796-4537-9cb5-a00ac9beb8d9 does not exist
Nov 25 20:29:24 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev b22a3f89-8718-4cf5-bfc7-57a24657fd8e does not exist
Nov 25 20:29:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:29:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:29:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:29:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:29:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:29:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:29:24 compute-0 sudo[252794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:29:24 compute-0 sudo[252794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:24 compute-0 sudo[252794]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:24 compute-0 sudo[252819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:29:24 compute-0 sudo[252819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:24 compute-0 sudo[252819]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:24 compute-0 sudo[252844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:29:24 compute-0 sudo[252844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:24 compute-0 sudo[252844]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:24 compute-0 ceph-mon[75144]: pgmap v733: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:29:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:29:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:29:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:29:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:29:24 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:29:24 compute-0 sudo[252869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:29:24 compute-0 sudo[252869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:24 compute-0 podman[252934]: 2025-11-25 20:29:24.972731574 +0000 UTC m=+0.070672002 container create 55961bab47dc3f017c0c69ea2ec51c398e4d8033073df66654876b305fe250b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:29:25 compute-0 systemd[1]: Started libpod-conmon-55961bab47dc3f017c0c69ea2ec51c398e4d8033073df66654876b305fe250b8.scope.
Nov 25 20:29:25 compute-0 podman[252934]: 2025-11-25 20:29:24.942508314 +0000 UTC m=+0.040448782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:29:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:29:25 compute-0 podman[252934]: 2025-11-25 20:29:25.07754167 +0000 UTC m=+0.175482128 container init 55961bab47dc3f017c0c69ea2ec51c398e4d8033073df66654876b305fe250b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:29:25 compute-0 podman[252934]: 2025-11-25 20:29:25.090880494 +0000 UTC m=+0.188820922 container start 55961bab47dc3f017c0c69ea2ec51c398e4d8033073df66654876b305fe250b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:29:25 compute-0 podman[252934]: 2025-11-25 20:29:25.095559438 +0000 UTC m=+0.193499906 container attach 55961bab47dc3f017c0c69ea2ec51c398e4d8033073df66654876b305fe250b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:29:25 compute-0 trusting_allen[252951]: 167 167
Nov 25 20:29:25 compute-0 systemd[1]: libpod-55961bab47dc3f017c0c69ea2ec51c398e4d8033073df66654876b305fe250b8.scope: Deactivated successfully.
Nov 25 20:29:25 compute-0 podman[252934]: 2025-11-25 20:29:25.101206888 +0000 UTC m=+0.199147316 container died 55961bab47dc3f017c0c69ea2ec51c398e4d8033073df66654876b305fe250b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-51ab14700cd3202547d920c0c712315b1a5f1a6b50944a0063dbc13726c55023-merged.mount: Deactivated successfully.
Nov 25 20:29:25 compute-0 podman[252934]: 2025-11-25 20:29:25.156681776 +0000 UTC m=+0.254622204 container remove 55961bab47dc3f017c0c69ea2ec51c398e4d8033073df66654876b305fe250b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:29:25 compute-0 systemd[1]: libpod-conmon-55961bab47dc3f017c0c69ea2ec51c398e4d8033073df66654876b305fe250b8.scope: Deactivated successfully.
Nov 25 20:29:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v734: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:25 compute-0 podman[252974]: 2025-11-25 20:29:25.418936623 +0000 UTC m=+0.074832253 container create 0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:29:25 compute-0 systemd[1]: Started libpod-conmon-0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75.scope.
Nov 25 20:29:25 compute-0 podman[252974]: 2025-11-25 20:29:25.386439082 +0000 UTC m=+0.042334752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:29:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149d7e706879de53b6ad605e5e8042de70c034d37d02d59410aff973cf4a6bff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149d7e706879de53b6ad605e5e8042de70c034d37d02d59410aff973cf4a6bff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149d7e706879de53b6ad605e5e8042de70c034d37d02d59410aff973cf4a6bff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149d7e706879de53b6ad605e5e8042de70c034d37d02d59410aff973cf4a6bff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149d7e706879de53b6ad605e5e8042de70c034d37d02d59410aff973cf4a6bff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:25 compute-0 podman[252974]: 2025-11-25 20:29:25.542704291 +0000 UTC m=+0.198599921 container init 0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:29:25 compute-0 podman[252974]: 2025-11-25 20:29:25.561642443 +0000 UTC m=+0.217538063 container start 0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:29:25 compute-0 podman[252974]: 2025-11-25 20:29:25.566504922 +0000 UTC m=+0.222400562 container attach 0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:29:26 compute-0 ceph-mon[75144]: pgmap v734: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:26 compute-0 laughing_northcutt[252990]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:29:26 compute-0 laughing_northcutt[252990]: --> relative data size: 1.0
Nov 25 20:29:26 compute-0 laughing_northcutt[252990]: --> All data devices are unavailable
Nov 25 20:29:26 compute-0 systemd[1]: libpod-0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75.scope: Deactivated successfully.
Nov 25 20:29:26 compute-0 podman[252974]: 2025-11-25 20:29:26.700137999 +0000 UTC m=+1.356033659 container died 0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:29:26 compute-0 systemd[1]: libpod-0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75.scope: Consumed 1.089s CPU time.
Nov 25 20:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-149d7e706879de53b6ad605e5e8042de70c034d37d02d59410aff973cf4a6bff-merged.mount: Deactivated successfully.
Nov 25 20:29:26 compute-0 podman[252974]: 2025-11-25 20:29:26.772621049 +0000 UTC m=+1.428516649 container remove 0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:29:26 compute-0 systemd[1]: libpod-conmon-0d3697fee902c67156e607cd75b6be6a341c949fd4ca6943534fb6aabd914b75.scope: Deactivated successfully.
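
The laughing_northcutt run above is cephadm probing this host for new OSDs to create: the stdout ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") is a ceph-volume lvm batch report, and "unavailable" here means the three candidate LVs are already claimed by existing OSDs, so there is nothing new to deploy. A hedged sketch of re-running that probe by hand; the fsid and LV paths are copied from this log, while invoking the packaged cephadm with `ceph-volume -- lvm batch --report` is an assumption about how to reproduce it outside the mgr:

    import subprocess

    # --report only prints the plan; it does not touch the devices.
    subprocess.run(
        ["sudo", "cephadm", "ceph-volume",
         "--fsid", "712dd110-763a-5547-8ef7-acda1414fdce",
         "--", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0",
         "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=False,
    )
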
Nov 25 20:29:26 compute-0 sudo[252869]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:29:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:29:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:29:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:29:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:29:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:29:26 compute-0 sudo[253034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:29:26 compute-0 sudo[253034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:26 compute-0 sudo[253034]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:27 compute-0 sudo[253059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:29:27 compute-0 sudo[253059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:27 compute-0 sudo[253059]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:27 compute-0 sudo[253084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:29:27 compute-0 sudo[253084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:27 compute-0 sudo[253084]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:27 compute-0 sudo[253109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:29:27 compute-0 sudo[253109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v735: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:27 compute-0 podman[253174]: 2025-11-25 20:29:27.694652521 +0000 UTC m=+0.067552410 container create 49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:29:27 compute-0 systemd[1]: Started libpod-conmon-49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a.scope.
Nov 25 20:29:27 compute-0 podman[253174]: 2025-11-25 20:29:27.670671037 +0000 UTC m=+0.043570956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:29:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:29:27 compute-0 podman[253174]: 2025-11-25 20:29:27.799878789 +0000 UTC m=+0.172778728 container init 49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_liskov, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:29:27 compute-0 podman[253174]: 2025-11-25 20:29:27.811174008 +0000 UTC m=+0.184073887 container start 49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_liskov, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:29:27 compute-0 podman[253174]: 2025-11-25 20:29:27.815473692 +0000 UTC m=+0.188373641 container attach 49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_liskov, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:29:27 compute-0 strange_liskov[253190]: 167 167
Nov 25 20:29:27 compute-0 systemd[1]: libpod-49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a.scope: Deactivated successfully.
Nov 25 20:29:27 compute-0 conmon[253190]: conmon 49c92feff86671a464b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a.scope/container/memory.events
Nov 25 20:29:27 compute-0 podman[253174]: 2025-11-25 20:29:27.821300907 +0000 UTC m=+0.194200816 container died 49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:29:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbdf581eda23ac2d7be00fe9ddd12e1413baaa1dc49e6704447376e452bbd445-merged.mount: Deactivated successfully.
Nov 25 20:29:27 compute-0 podman[253174]: 2025-11-25 20:29:27.873623462 +0000 UTC m=+0.246523321 container remove 49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:29:27 compute-0 systemd[1]: libpod-conmon-49c92feff86671a464b8b3399f84c53698650501983212f0954478b8e0f3ec7a.scope: Deactivated successfully.
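
The conmon <nwarn> line above ("Failed to open cgroups file: .../memory.events") is most likely a benign race with very short-lived containers: the container exits and its cgroup is torn down before conmon gets to read the OOM counters, so the open fails harmlessly. The controller-less path layout already implies a unified (v2) cgroup hierarchy; if in doubt, that can be checked directly:

    from pathlib import Path

    # cgroup.controllers exists at the hierarchy root only on cgroup v2.
    unified = Path("/sys/fs/cgroup/cgroup.controllers").exists()
    print("cgroup v2 (unified)" if unified else "cgroup v1 or hybrid")
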
Nov 25 20:29:28 compute-0 podman[253216]: 2025-11-25 20:29:28.107660422 +0000 UTC m=+0.073618801 container create 29c1b2cbaf31ceab30476f64bcaf92708f269b5181867dcb74dc5677f255901e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:29:28 compute-0 systemd[1]: Started libpod-conmon-29c1b2cbaf31ceab30476f64bcaf92708f269b5181867dcb74dc5677f255901e.scope.
Nov 25 20:29:28 compute-0 podman[253216]: 2025-11-25 20:29:28.078670204 +0000 UTC m=+0.044628643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:29:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d878c77d8423aaf0eeb3806fe3d192605afe2f9e424f922a56975a1f0de617/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d878c77d8423aaf0eeb3806fe3d192605afe2f9e424f922a56975a1f0de617/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d878c77d8423aaf0eeb3806fe3d192605afe2f9e424f922a56975a1f0de617/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d878c77d8423aaf0eeb3806fe3d192605afe2f9e424f922a56975a1f0de617/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:28 compute-0 podman[253216]: 2025-11-25 20:29:28.201144907 +0000 UTC m=+0.167103316 container init 29c1b2cbaf31ceab30476f64bcaf92708f269b5181867dcb74dc5677f255901e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 20:29:28 compute-0 podman[253216]: 2025-11-25 20:29:28.209728205 +0000 UTC m=+0.175686564 container start 29c1b2cbaf31ceab30476f64bcaf92708f269b5181867dcb74dc5677f255901e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:29:28 compute-0 podman[253216]: 2025-11-25 20:29:28.213857285 +0000 UTC m=+0.179815674 container attach 29c1b2cbaf31ceab30476f64bcaf92708f269b5181867dcb74dc5677f255901e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:29:28 compute-0 ceph-mon[75144]: pgmap v735: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:28 compute-0 strange_tharp[253233]: {
Nov 25 20:29:28 compute-0 strange_tharp[253233]:     "0": [
Nov 25 20:29:28 compute-0 strange_tharp[253233]:         {
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "devices": [
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "/dev/loop3"
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             ],
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_name": "ceph_lv0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_size": "21470642176",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "name": "ceph_lv0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "tags": {
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.cluster_name": "ceph",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.crush_device_class": "",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.encrypted": "0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.osd_id": "0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.type": "block",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.vdo": "0"
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             },
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "type": "block",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "vg_name": "ceph_vg0"
Nov 25 20:29:28 compute-0 strange_tharp[253233]:         }
Nov 25 20:29:28 compute-0 strange_tharp[253233]:     ],
Nov 25 20:29:28 compute-0 strange_tharp[253233]:     "1": [
Nov 25 20:29:28 compute-0 strange_tharp[253233]:         {
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "devices": [
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "/dev/loop4"
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             ],
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_name": "ceph_lv1",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_size": "21470642176",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "name": "ceph_lv1",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "tags": {
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.cluster_name": "ceph",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.crush_device_class": "",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.encrypted": "0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.osd_id": "1",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.type": "block",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.vdo": "0"
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             },
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "type": "block",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "vg_name": "ceph_vg1"
Nov 25 20:29:28 compute-0 strange_tharp[253233]:         }
Nov 25 20:29:28 compute-0 strange_tharp[253233]:     ],
Nov 25 20:29:28 compute-0 strange_tharp[253233]:     "2": [
Nov 25 20:29:28 compute-0 strange_tharp[253233]:         {
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "devices": [
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "/dev/loop5"
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             ],
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_name": "ceph_lv2",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_size": "21470642176",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "name": "ceph_lv2",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "tags": {
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.cluster_name": "ceph",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.crush_device_class": "",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.encrypted": "0",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.osd_id": "2",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.type": "block",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:                 "ceph.vdo": "0"
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             },
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "type": "block",
Nov 25 20:29:28 compute-0 strange_tharp[253233]:             "vg_name": "ceph_vg2"
Nov 25 20:29:28 compute-0 strange_tharp[253233]:         }
Nov 25 20:29:28 compute-0 strange_tharp[253233]:     ]
Nov 25 20:29:28 compute-0 strange_tharp[253233]: }
Nov 25 20:29:29 compute-0 systemd[1]: libpod-29c1b2cbaf31ceab30476f64bcaf92708f269b5181867dcb74dc5677f255901e.scope: Deactivated successfully.
Nov 25 20:29:29 compute-0 podman[253216]: 2025-11-25 20:29:29.011503843 +0000 UTC m=+0.977462262 container died 29c1b2cbaf31ceab30476f64bcaf92708f269b5181867dcb74dc5677f255901e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:29:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-23d878c77d8423aaf0eeb3806fe3d192605afe2f9e424f922a56975a1f0de617-merged.mount: Deactivated successfully.
Nov 25 20:29:29 compute-0 podman[253216]: 2025-11-25 20:29:29.092866528 +0000 UTC m=+1.058824907 container remove 29c1b2cbaf31ceab30476f64bcaf92708f269b5181867dcb74dc5677f255901e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:29:29 compute-0 systemd[1]: libpod-conmon-29c1b2cbaf31ceab30476f64bcaf92708f269b5181867dcb74dc5677f255901e.scope: Deactivated successfully.
Nov 25 20:29:29 compute-0 sudo[253109]: pam_unix(sudo:session): session closed for user root
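
The JSON printed by strange_tharp is the output of the `ceph-volume ... lvm list --format json` call logged at 20:29:27: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags duplicated in string and dict form. Reducing it to the osd -> device mapping an operator usually wants takes a few lines of standard-library Python; the document below is trimmed to one entry, while the full output above has keys "0", "1" and "2":

    import json

    raw = """
    {
        "0": [
            {
                "devices": ["/dev/loop3"],
                "lv_path": "/dev/ceph_vg0/ceph_lv0",
                "tags": {"ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4"}
            }
        ]
    }
    """

    for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
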
Nov 25 20:29:29 compute-0 sudo[253256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:29:29 compute-0 sudo[253256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:29 compute-0 sudo[253256]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v736: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:29 compute-0 sudo[253281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:29:29 compute-0 sudo[253281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:29 compute-0 sudo[253281]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:29 compute-0 sudo[253306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:29:29 compute-0 sudo[253306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:29 compute-0 sudo[253306]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:29 compute-0 sudo[253331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:29:29 compute-0 sudo[253331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:29 compute-0 podman[253396]: 2025-11-25 20:29:29.979932213 +0000 UTC m=+0.046046200 container create 2fc453d705f5c8fd4b4da8e338de6cf46d0ce75781202348cdbf8003e5265156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_colden, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:29:30 compute-0 systemd[1]: Started libpod-conmon-2fc453d705f5c8fd4b4da8e338de6cf46d0ce75781202348cdbf8003e5265156.scope.
Nov 25 20:29:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:29:30 compute-0 podman[253396]: 2025-11-25 20:29:29.962252115 +0000 UTC m=+0.028366152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:29:30 compute-0 podman[253396]: 2025-11-25 20:29:30.077093287 +0000 UTC m=+0.143207374 container init 2fc453d705f5c8fd4b4da8e338de6cf46d0ce75781202348cdbf8003e5265156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_colden, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:29:30 compute-0 podman[253396]: 2025-11-25 20:29:30.089053554 +0000 UTC m=+0.155167571 container start 2fc453d705f5c8fd4b4da8e338de6cf46d0ce75781202348cdbf8003e5265156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_colden, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 20:29:30 compute-0 podman[253396]: 2025-11-25 20:29:30.093530963 +0000 UTC m=+0.159644990 container attach 2fc453d705f5c8fd4b4da8e338de6cf46d0ce75781202348cdbf8003e5265156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_colden, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:29:30 compute-0 condescending_colden[253412]: 167 167
Nov 25 20:29:30 compute-0 systemd[1]: libpod-2fc453d705f5c8fd4b4da8e338de6cf46d0ce75781202348cdbf8003e5265156.scope: Deactivated successfully.
Nov 25 20:29:30 compute-0 podman[253396]: 2025-11-25 20:29:30.097783415 +0000 UTC m=+0.163897472 container died 2fc453d705f5c8fd4b4da8e338de6cf46d0ce75781202348cdbf8003e5265156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_colden, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:29:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe76e2707ba94560131d1e916a97c3ff596aae8e2be054b32c0f87cb2450babf-merged.mount: Deactivated successfully.
Nov 25 20:29:30 compute-0 podman[253396]: 2025-11-25 20:29:30.151295252 +0000 UTC m=+0.217409279 container remove 2fc453d705f5c8fd4b4da8e338de6cf46d0ce75781202348cdbf8003e5265156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_colden, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:29:30 compute-0 systemd[1]: libpod-conmon-2fc453d705f5c8fd4b4da8e338de6cf46d0ce75781202348cdbf8003e5265156.scope: Deactivated successfully.
Nov 25 20:29:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:30 compute-0 podman[253438]: 2025-11-25 20:29:30.403103902 +0000 UTC m=+0.065923316 container create e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:29:30 compute-0 systemd[1]: Started libpod-conmon-e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a.scope.
Nov 25 20:29:30 compute-0 ceph-mon[75144]: pgmap v736: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:30 compute-0 podman[253438]: 2025-11-25 20:29:30.37697555 +0000 UTC m=+0.039794994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:29:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc953c3318dffab112aa0d8d2b3acff0c7b619f56a339b9540e2fc4c3e06379/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc953c3318dffab112aa0d8d2b3acff0c7b619f56a339b9540e2fc4c3e06379/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc953c3318dffab112aa0d8d2b3acff0c7b619f56a339b9540e2fc4c3e06379/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc953c3318dffab112aa0d8d2b3acff0c7b619f56a339b9540e2fc4c3e06379/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:29:30 compute-0 podman[253438]: 2025-11-25 20:29:30.521624432 +0000 UTC m=+0.184443876 container init e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:29:30 compute-0 podman[253438]: 2025-11-25 20:29:30.53626259 +0000 UTC m=+0.199082004 container start e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:29:30 compute-0 podman[253438]: 2025-11-25 20:29:30.539937447 +0000 UTC m=+0.202756901 container attach e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:29:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v737: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]: {
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "osd_id": 2,
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "type": "bluestore"
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:     },
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "osd_id": 1,
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "type": "bluestore"
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:     },
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "osd_id": 0,
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:         "type": "bluestore"
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]:     }
Nov 25 20:29:31 compute-0 dreamy_mirzakhani[253454]: }
Nov 25 20:29:31 compute-0 systemd[1]: libpod-e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a.scope: Deactivated successfully.
Nov 25 20:29:31 compute-0 podman[253438]: 2025-11-25 20:29:31.665378237 +0000 UTC m=+1.328197671 container died e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:29:31 compute-0 systemd[1]: libpod-e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a.scope: Consumed 1.136s CPU time.
Nov 25 20:29:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-acc953c3318dffab112aa0d8d2b3acff0c7b619f56a339b9540e2fc4c3e06379-merged.mount: Deactivated successfully.
Nov 25 20:29:31 compute-0 podman[253438]: 2025-11-25 20:29:31.724924754 +0000 UTC m=+1.387744138 container remove e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 25 20:29:31 compute-0 systemd[1]: libpod-conmon-e2dce46515852bef5059c2a226f4f6674138438d70e62829b62f1f75d65d484a.scope: Deactivated successfully.
Nov 25 20:29:31 compute-0 sudo[253331]: pam_unix(sudo:session): session closed for user root
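
dreamy_mirzakhani's JSON is the companion `ceph-volume ... raw list --format json` view of the same three OSDs, keyed by osd_uuid and pointing at the device-mapper paths rather than the VG/LV names. The two listings should agree on the osd_id <-> osd_uuid pairing; a minimal cross-check, with both documents trimmed to the fields used:

    # Trimmed from the lvm list and raw list outputs above.
    lvm_listing = {
        "0": [{"tags": {"ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4"}}],
        "1": [{"tags": {"ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291"}}],
        "2": [{"tags": {"ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e"}}],
    }
    raw_listing = {
        "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {"osd_id": 0},
        "7e844079-8f15-40a1-8d48-4a531b96b291": {"osd_id": 1},
        "21cf5470-2713-4831-8402-4fccd506c64e": {"osd_id": 2},
    }

    lvm_pairs = {(int(i), lv["tags"]["ceph.osd_fsid"])
                 for i, lvs in lvm_listing.items() for lv in lvs}
    raw_pairs = {(e["osd_id"], u) for u, e in raw_listing.items()}
    assert lvm_pairs == raw_pairs  # the two views are consistent
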
Nov 25 20:29:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:29:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:29:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:29:31 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
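
The two handle_command/audit pairs show the cephadm mgr module persisting the inventory it just gathered into the monitors' config-key store, under mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0. Those keys can be read back with the standard `ceph config-key get`; the sketch below assumes the stored value is JSON, which is how current cephadm releases write device inventory:

    import json
    import subprocess

    # Key name copied from the handle_command line above.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    devices = json.loads(out)  # assumption: cephadm stores JSON here
    print(f"{len(out)} bytes, top-level type {type(devices).__name__}")
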
Nov 25 20:29:31 compute-0 sudo[253501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:29:31 compute-0 sudo[253501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:31 compute-0 sudo[253501]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:31 compute-0 sudo[253526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:29:31 compute-0 sudo[253526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:29:31 compute-0 sudo[253526]: pam_unix(sudo:session): session closed for user root
Nov 25 20:29:32 compute-0 nova_compute[248866]: 2025-11-25 20:29:32.037 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:32 compute-0 ceph-mon[75144]: pgmap v737: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:29:32 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.491386) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102572491438, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1814, "num_deletes": 251, "total_data_size": 1984814, "memory_usage": 2019888, "flush_reason": "Manual Compaction"}
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102572507249, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1930810, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13691, "largest_seqno": 15504, "table_properties": {"data_size": 1922640, "index_size": 4988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16658, "raw_average_key_size": 19, "raw_value_size": 1906103, "raw_average_value_size": 2263, "num_data_blocks": 230, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764102375, "oldest_key_time": 1764102375, "file_creation_time": 1764102572, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 15915 microseconds, and 9279 cpu microseconds.
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.507306) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1930810 bytes OK
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.507331) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.509480) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.509502) EVENT_LOG_v1 {"time_micros": 1764102572509496, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.509526) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1977097, prev total WAL file size 1977097, number of live WAL files 2.
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.510776) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1885KB)], [35(4713KB)]
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102572510873, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 6757055, "oldest_snapshot_seqno": -1}
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3429 keys, 5585743 bytes, temperature: kUnknown
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102572554665, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 5585743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5560125, "index_size": 15973, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 80908, "raw_average_key_size": 23, "raw_value_size": 5495821, "raw_average_value_size": 1602, "num_data_blocks": 692, "num_entries": 3429, "num_filter_entries": 3429, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764102572, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.554987) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 5585743 bytes
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.556290) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.0 rd, 127.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.6 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 3943, records dropped: 514 output_compression: NoCompression
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.556321) EVENT_LOG_v1 {"time_micros": 1764102572556307, "job": 16, "event": "compaction_finished", "compaction_time_micros": 43880, "compaction_time_cpu_micros": 24593, "output_level": 6, "num_output_files": 1, "total_output_size": 5585743, "num_input_records": 3943, "num_output_records": 3429, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102572557193, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102572558605, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.510677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.558725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.558734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.558737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.558740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:29:32 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:29:32.558743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.037 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.057 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.057 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.088 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.089 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.089 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.089 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.089 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:29:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v738: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:29:33 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1974066610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.511 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:29:33 compute-0 ceph-mon[75144]: pgmap v738: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:33 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1974066610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.694 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.695 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5274MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.695 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.695 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.815 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.816 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:29:33 compute-0 nova_compute[248866]: 2025-11-25 20:29:33.851 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:29:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:29:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230076545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:29:34 compute-0 nova_compute[248866]: 2025-11-25 20:29:34.309 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:29:34 compute-0 nova_compute[248866]: 2025-11-25 20:29:34.316 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:29:34 compute-0 nova_compute[248866]: 2025-11-25 20:29:34.332 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:29:34 compute-0 nova_compute[248866]: 2025-11-25 20:29:34.335 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:29:34 compute-0 nova_compute[248866]: 2025-11-25 20:29:34.336 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:29:34 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3230076545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:29:34 compute-0 podman[253595]: 2025-11-25 20:29:34.997344363 +0000 UTC m=+0.088515035 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 20:29:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v739: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:35 compute-0 nova_compute[248866]: 2025-11-25 20:29:35.321 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:35 compute-0 nova_compute[248866]: 2025-11-25 20:29:35.322 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:29:35 compute-0 nova_compute[248866]: 2025-11-25 20:29:35.322 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:29:35 compute-0 nova_compute[248866]: 2025-11-25 20:29:35.344 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:29:35 compute-0 nova_compute[248866]: 2025-11-25 20:29:35.344 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:35 compute-0 nova_compute[248866]: 2025-11-25 20:29:35.345 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:35 compute-0 nova_compute[248866]: 2025-11-25 20:29:35.345 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:35 compute-0 nova_compute[248866]: 2025-11-25 20:29:35.346 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:29:35 compute-0 ceph-mon[75144]: pgmap v739: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:36 compute-0 nova_compute[248866]: 2025-11-25 20:29:36.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:37 compute-0 nova_compute[248866]: 2025-11-25 20:29:37.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:29:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v740: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:38 compute-0 ceph-mon[75144]: pgmap v740: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:38 compute-0 podman[253614]: 2025-11-25 20:29:38.999260176 +0000 UTC m=+0.093097597 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 20:29:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v741: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:40 compute-0 ceph-mon[75144]: pgmap v741: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v742: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:42 compute-0 ceph-mon[75144]: pgmap v742: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v743: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:44 compute-0 ceph-mon[75144]: pgmap v743: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v744: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:46 compute-0 podman[253635]: 2025-11-25 20:29:46.046261974 +0000 UTC m=+0.135359626 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 20:29:46 compute-0 ceph-mon[75144]: pgmap v744: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v745: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:48 compute-0 ceph-mon[75144]: pgmap v745: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:29:48.945 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:29:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:29:48.946 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:29:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:29:48.946 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:29:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v746: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:50 compute-0 ceph-mon[75144]: pgmap v746: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v747: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:52 compute-0 ceph-mon[75144]: pgmap v747: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v748: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:54 compute-0 ceph-mon[75144]: pgmap v748: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:29:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v749: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:56 compute-0 ceph-mon[75144]: pgmap v749: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:29:56
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'vms', 'images', 'backups', 'cephfs.cephfs.data', '.mgr']
Nov 25 20:29:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:29:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v750: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:58 compute-0 sshd-session[253662]: Invalid user jayde from 62.60.131.157 port 62653
Nov 25 20:29:58 compute-0 ceph-mon[75144]: pgmap v750: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:29:58 compute-0 sshd-session[253662]: Received disconnect from 62.60.131.157 port 62653:11: Bye [preauth]
Nov 25 20:29:58 compute-0 sshd-session[253662]: Disconnected from invalid user jayde 62.60.131.157 port 62653 [preauth]
Nov 25 20:29:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v751: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:00 compute-0 ceph-mon[75144]: pgmap v751: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v752: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:30:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:30:02 compute-0 ceph-mon[75144]: pgmap v752: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v753: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:04 compute-0 ceph-mon[75144]: pgmap v753: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v754: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:06 compute-0 podman[253664]: 2025-11-25 20:30:06.007922755 +0000 UTC m=+0.093984201 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 20:30:06 compute-0 ceph-mon[75144]: pgmap v754: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v755: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:07 compute-0 ceph-mon[75144]: pgmap v755: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v756: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:10 compute-0 podman[253684]: 2025-11-25 20:30:10.030902945 +0000 UTC m=+0.122268790 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:30:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:10 compute-0 ceph-mon[75144]: pgmap v756: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v757: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:12 compute-0 ceph-mon[75144]: pgmap v757: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v758: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:14 compute-0 ceph-mon[75144]: pgmap v758: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v759: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:16 compute-0 ceph-mon[75144]: pgmap v759: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:17 compute-0 podman[253706]: 2025-11-25 20:30:17.027034536 +0000 UTC m=+0.121983332 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 20:30:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:30:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1604020846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:30:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:30:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1604020846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:30:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v760: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1604020846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:30:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1604020846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:30:18 compute-0 ceph-mon[75144]: pgmap v760: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v761: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:20 compute-0 ceph-mon[75144]: pgmap v761: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v762: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:21 compute-0 ceph-mon[75144]: pgmap v762: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v763: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:24 compute-0 ceph-mon[75144]: pgmap v763: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v764: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:26 compute-0 ceph-mon[75144]: pgmap v764: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:30:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:30:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:30:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:30:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:30:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:30:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v765: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:28 compute-0 ceph-mon[75144]: pgmap v765: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v766: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:31 compute-0 nova_compute[248866]: 2025-11-25 20:30:31.044 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:31 compute-0 nova_compute[248866]: 2025-11-25 20:30:31.045 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 20:30:31 compute-0 nova_compute[248866]: 2025-11-25 20:30:31.071 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 20:30:31 compute-0 nova_compute[248866]: 2025-11-25 20:30:31.073 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:31 compute-0 nova_compute[248866]: 2025-11-25 20:30:31.073 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 20:30:31 compute-0 nova_compute[248866]: 2025-11-25 20:30:31.099 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:31 compute-0 ceph-mon[75144]: pgmap v766: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v767: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:32 compute-0 sudo[253735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:32 compute-0 sudo[253735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:32 compute-0 sudo[253735]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:32 compute-0 sudo[253760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:30:32 compute-0 sudo[253760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:32 compute-0 sudo[253760]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:32 compute-0 ceph-mon[75144]: pgmap v767: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:32 compute-0 sudo[253785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:32 compute-0 sudo[253785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:32 compute-0 sudo[253785]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:32 compute-0 sudo[253810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:30:32 compute-0 sudo[253810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:32 compute-0 podman[253908]: 2025-11-25 20:30:32.961517265 +0000 UTC m=+0.088384432 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:30:33 compute-0 podman[253908]: 2025-11-25 20:30:33.086142546 +0000 UTC m=+0.213009693 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:30:33 compute-0 nova_compute[248866]: 2025-11-25 20:30:33.112 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:33 compute-0 nova_compute[248866]: 2025-11-25 20:30:33.112 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v768: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:33 compute-0 sudo[253810]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:30:33 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:30:33 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:33 compute-0 sudo[254030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:33 compute-0 sudo[254030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:33 compute-0 sudo[254030]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:33 compute-0 sudo[254055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:30:33 compute-0 sudo[254055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:33 compute-0 sudo[254055]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:33 compute-0 sudo[254080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:33 compute-0 sudo[254080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:33 compute-0 sudo[254080]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:33 compute-0 sudo[254105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:30:34 compute-0 sudo[254105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.113 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.114 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.115 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.116 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.116 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:30:34 compute-0 ceph-mon[75144]: pgmap v768: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:34 compute-0 sudo[254105]: pam_unix(sudo:session): session closed for user root
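[Note: the sudo entries above trace cephadm's recurring host probe: validate passwordless sudo with /bin/true, locate python3, then execute the copied cephadm binary with gather-facts. A minimal sketch of that sequence; the function name and structure are illustrative, not cephadm's internal API.]

    import shutil
    import subprocess

    def probe_and_gather_facts(cephadm_path: str) -> None:
        # 1. Cheapest possible check that passwordless sudo works.
        subprocess.run(["sudo", "/bin/true"], check=True)
        # 2. Find an interpreter for the copied cephadm binary.
        python3 = shutil.which("python3")
        if python3 is None:
            raise RuntimeError("no python3 on this host")
        # 3. Run the binary as the log shows, with its watchdog timeout.
        subprocess.run(
            ["sudo", python3, cephadm_path, "--timeout", "895", "gather-facts"],
            check=True,
        )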
Nov 25 20:30:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:30:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2067043829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.616 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
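[Note: nova's libvirt driver sizes the RBD-backed disk pool by shelling out to ceph df, exactly as the two lines above show. A minimal consumer of that output, assuming the top-level "stats" block (total_bytes, total_avail_bytes) that current Ceph releases emit in JSON mode.]

    import json
    import subprocess

    def ceph_capacity_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
        raw = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf]
        )
        stats = json.loads(raw)["stats"]  # assumed key layout, see note above
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

[Against the pgmap shown above (60 GiB / 60 GiB avail) this would return roughly (60.0, 60.0).]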
Nov 25 20:30:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:30:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:30:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:30:34 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:30:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:30:34 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:34 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 9c66f850-e11d-4513-a6a0-c501da15c513 does not exist
Nov 25 20:30:34 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d32d8b22-bdef-4722-818d-6d1ea1b7c4b5 does not exist
Nov 25 20:30:34 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 887b8fb8-d2c5-4e5d-94e0-b2484eab7980 does not exist
Nov 25 20:30:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:30:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:30:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:30:34 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:30:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:30:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:30:34 compute-0 sudo[254184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:34 compute-0 sudo[254184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:34 compute-0 sudo[254184]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.801 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.803 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5282MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.803 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.803 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:30:34 compute-0 sudo[254209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:30:34 compute-0 sudo[254209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:34 compute-0 sudo[254209]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.878 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.878 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:30:34 compute-0 nova_compute[248866]: 2025-11-25 20:30:34.892 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:30:34 compute-0 sudo[254234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:34 compute-0 sudo[254234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:34 compute-0 sudo[254234]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:34 compute-0 sudo[254260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:30:34 compute-0 sudo[254260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:30:35 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4233187235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:30:35 compute-0 nova_compute[248866]: 2025-11-25 20:30:35.337 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:30:35 compute-0 nova_compute[248866]: 2025-11-25 20:30:35.345 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:30:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v769: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:35 compute-0 nova_compute[248866]: 2025-11-25 20:30:35.364 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
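[Note: placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class; a quick check against the numbers nova just reported.]

    # Worked check of the inventory sent to placement above.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 7168.0 / VCPU 32.0 / DISK_GB 53.1

[So this otherwise idle 8-vCPU host can overcommit up to 32 vCPUs, while the 0.9 disk ratio holds back roughly 10% of the 59 GB pool.]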
Nov 25 20:30:35 compute-0 nova_compute[248866]: 2025-11-25 20:30:35.367 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:30:35 compute-0 nova_compute[248866]: 2025-11-25 20:30:35.368 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
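[Note: the Acquiring/acquired/"released" triples bracketing the resource update come from oslo.concurrency's in-process locks. A minimal sketch using the documented lockutils.synchronized decorator, whose wrapper ("inner" in the log paths above) is what emits the waited/held timings.]

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def _update_available_resource():
        # Runs with the in-process lock held; acquire/release events and
        # the waited/held timings are logged by the decorator at DEBUG.
        pass  # recompute and persist the host's resource view here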
Nov 25 20:30:35 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2067043829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:30:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:30:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:30:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:30:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:30:35 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:30:35 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4233187235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:30:35 compute-0 podman[254346]: 2025-11-25 20:30:35.417830317 +0000 UTC m=+0.061929942 container create 2737c11fc0dc05a3e1adaaa49c0e62cc9e4f5e951c9fe9b203ff341599e38df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:30:35 compute-0 systemd[1]: Started libpod-conmon-2737c11fc0dc05a3e1adaaa49c0e62cc9e4f5e951c9fe9b203ff341599e38df7.scope.
Nov 25 20:30:35 compute-0 podman[254346]: 2025-11-25 20:30:35.392774723 +0000 UTC m=+0.036874408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:30:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:30:35 compute-0 podman[254346]: 2025-11-25 20:30:35.507781399 +0000 UTC m=+0.151881054 container init 2737c11fc0dc05a3e1adaaa49c0e62cc9e4f5e951c9fe9b203ff341599e38df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:30:35 compute-0 podman[254346]: 2025-11-25 20:30:35.518650277 +0000 UTC m=+0.162749912 container start 2737c11fc0dc05a3e1adaaa49c0e62cc9e4f5e951c9fe9b203ff341599e38df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 20:30:35 compute-0 podman[254346]: 2025-11-25 20:30:35.522950541 +0000 UTC m=+0.167050236 container attach 2737c11fc0dc05a3e1adaaa49c0e62cc9e4f5e951c9fe9b203ff341599e38df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:30:35 compute-0 silly_newton[254362]: 167 167
Nov 25 20:30:35 compute-0 systemd[1]: libpod-2737c11fc0dc05a3e1adaaa49c0e62cc9e4f5e951c9fe9b203ff341599e38df7.scope: Deactivated successfully.
Nov 25 20:30:35 compute-0 podman[254346]: 2025-11-25 20:30:35.528146459 +0000 UTC m=+0.172246094 container died 2737c11fc0dc05a3e1adaaa49c0e62cc9e4f5e951c9fe9b203ff341599e38df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_newton, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:30:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d0649450beb5ffac7e3b9dc676ff1920ec7899ba20c17e96b0200c99c6ef06a-merged.mount: Deactivated successfully.
Nov 25 20:30:35 compute-0 podman[254346]: 2025-11-25 20:30:35.581769319 +0000 UTC m=+0.225868954 container remove 2737c11fc0dc05a3e1adaaa49c0e62cc9e4f5e951c9fe9b203ff341599e38df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_newton, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:30:35 compute-0 systemd[1]: libpod-conmon-2737c11fc0dc05a3e1adaaa49c0e62cc9e4f5e951c9fe9b203ff341599e38df7.scope: Deactivated successfully.
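[Note: the throwaway silly_newton container existed only long enough to print "167 167", the numeric ceph user and group id inside the image, before being removed. One way to reproduce that probe by hand; the exact command cephadm runs is not visible in the log, so the stat invocation here is an assumption.]

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Print the numeric owner of /var/lib/ceph inside the image (assumed probe).
    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        text=True,
    )
    print(out.strip())  # expected: 167 167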
Nov 25 20:30:35 compute-0 podman[254386]: 2025-11-25 20:30:35.819693352 +0000 UTC m=+0.061734157 container create 4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keller, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 20:30:35 compute-0 systemd[1]: Started libpod-conmon-4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4.scope.
Nov 25 20:30:35 compute-0 podman[254386]: 2025-11-25 20:30:35.799283091 +0000 UTC m=+0.041323896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:30:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74a3dfaa98454bfce4940b33620285652bce31e48dcf79d26554738a76d4092/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74a3dfaa98454bfce4940b33620285652bce31e48dcf79d26554738a76d4092/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74a3dfaa98454bfce4940b33620285652bce31e48dcf79d26554738a76d4092/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74a3dfaa98454bfce4940b33620285652bce31e48dcf79d26554738a76d4092/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74a3dfaa98454bfce4940b33620285652bce31e48dcf79d26554738a76d4092/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:35 compute-0 podman[254386]: 2025-11-25 20:30:35.930351422 +0000 UTC m=+0.172392277 container init 4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keller, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 20:30:35 compute-0 podman[254386]: 2025-11-25 20:30:35.945055973 +0000 UTC m=+0.187096778 container start 4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:30:35 compute-0 podman[254386]: 2025-11-25 20:30:35.950189628 +0000 UTC m=+0.192230423 container attach 4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keller, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:30:36 compute-0 nova_compute[248866]: 2025-11-25 20:30:36.369 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:36 compute-0 nova_compute[248866]: 2025-11-25 20:30:36.370 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:30:36 compute-0 nova_compute[248866]: 2025-11-25 20:30:36.371 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:30:36 compute-0 nova_compute[248866]: 2025-11-25 20:30:36.392 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:30:36 compute-0 nova_compute[248866]: 2025-11-25 20:30:36.392 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:36 compute-0 nova_compute[248866]: 2025-11-25 20:30:36.393 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:36 compute-0 nova_compute[248866]: 2025-11-25 20:30:36.394 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:36 compute-0 nova_compute[248866]: 2025-11-25 20:30:36.394 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:30:36 compute-0 ceph-mon[75144]: pgmap v769: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:37 compute-0 podman[254427]: 2025-11-25 20:30:37.000032556 +0000 UTC m=+0.090252402 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
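[Note: the health_status=healthy event above is podman evaluating the healthcheck baked into the container config ('test': '/openstack/healthcheck' in the config_data dict). The same check can be forced by hand with the documented healthcheck subcommand.]

    import subprocess

    # Exit status 0 means the check passed, matching health_status=healthy.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]
    ).returncode
    print("healthy" if rc == 0 else "unhealthy")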
Nov 25 20:30:37 compute-0 musing_keller[254403]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:30:37 compute-0 musing_keller[254403]: --> relative data size: 1.0
Nov 25 20:30:37 compute-0 musing_keller[254403]: --> All data devices are unavailable
Nov 25 20:30:37 compute-0 systemd[1]: libpod-4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4.scope: Deactivated successfully.
Nov 25 20:30:37 compute-0 podman[254386]: 2025-11-25 20:30:37.06778384 +0000 UTC m=+1.309824655 container died 4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keller, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:30:37 compute-0 systemd[1]: libpod-4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4.scope: Consumed 1.078s CPU time.
Nov 25 20:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d74a3dfaa98454bfce4940b33620285652bce31e48dcf79d26554738a76d4092-merged.mount: Deactivated successfully.
Nov 25 20:30:37 compute-0 podman[254386]: 2025-11-25 20:30:37.149148305 +0000 UTC m=+1.391189100 container remove 4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:30:37 compute-0 systemd[1]: libpod-conmon-4dc99241a41f05250982e990fad293f65799da5008deb690ed94a9e893cc62e4.scope: Deactivated successfully.
Nov 25 20:30:37 compute-0 sudo[254260]: pam_unix(sudo:session): session closed for user root
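[Note: the batch run inside musing_keller declined to act. "All data devices are unavailable" is what ceph-volume typically reports when the candidate LVs already carry OSD data, which the lvm list output further below confirms for osd.0 through osd.2. The same batch call can be previewed non-destructively with the documented --report flag.]

    import subprocess

    LVS = [
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ]
    # --report prints the planned layout (or why nothing would be done)
    # without creating or modifying any OSD.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *LVS, "--report"],
        check=True,
    )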
Nov 25 20:30:37 compute-0 sudo[254464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:37 compute-0 sudo[254464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:37 compute-0 sudo[254464]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v770: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:37 compute-0 sudo[254489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:30:37 compute-0 sudo[254489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:37 compute-0 sudo[254489]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:37 compute-0 sudo[254514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:37 compute-0 sudo[254514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:37 compute-0 sudo[254514]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:37 compute-0 sudo[254539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:30:37 compute-0 sudo[254539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:38 compute-0 podman[254605]: 2025-11-25 20:30:38.017708962 +0000 UTC m=+0.063972585 container create 404caeaeca26ff58377a6b7dc936a8c5314d0fb1678f80dac075d93c1d1f8f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:30:38 compute-0 systemd[1]: Started libpod-conmon-404caeaeca26ff58377a6b7dc936a8c5314d0fb1678f80dac075d93c1d1f8f0f.scope.
Nov 25 20:30:38 compute-0 podman[254605]: 2025-11-25 20:30:37.991778085 +0000 UTC m=+0.038041758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:30:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:30:38 compute-0 podman[254605]: 2025-11-25 20:30:38.122132448 +0000 UTC m=+0.168396101 container init 404caeaeca26ff58377a6b7dc936a8c5314d0fb1678f80dac075d93c1d1f8f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:30:38 compute-0 podman[254605]: 2025-11-25 20:30:38.133400527 +0000 UTC m=+0.179664150 container start 404caeaeca26ff58377a6b7dc936a8c5314d0fb1678f80dac075d93c1d1f8f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:30:38 compute-0 podman[254605]: 2025-11-25 20:30:38.137433563 +0000 UTC m=+0.183697186 container attach 404caeaeca26ff58377a6b7dc936a8c5314d0fb1678f80dac075d93c1d1f8f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:30:38 compute-0 pensive_ramanujan[254621]: 167 167
Nov 25 20:30:38 compute-0 systemd[1]: libpod-404caeaeca26ff58377a6b7dc936a8c5314d0fb1678f80dac075d93c1d1f8f0f.scope: Deactivated successfully.
Nov 25 20:30:38 compute-0 podman[254605]: 2025-11-25 20:30:38.141538412 +0000 UTC m=+0.187802005 container died 404caeaeca26ff58377a6b7dc936a8c5314d0fb1678f80dac075d93c1d1f8f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 20:30:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-41b51ab7fca6f5e5f2d1bbcd99e512651c0af1b591f5702c93eff128cf7a58b7-merged.mount: Deactivated successfully.
Nov 25 20:30:38 compute-0 podman[254605]: 2025-11-25 20:30:38.191818914 +0000 UTC m=+0.238082537 container remove 404caeaeca26ff58377a6b7dc936a8c5314d0fb1678f80dac075d93c1d1f8f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:30:38 compute-0 systemd[1]: libpod-conmon-404caeaeca26ff58377a6b7dc936a8c5314d0fb1678f80dac075d93c1d1f8f0f.scope: Deactivated successfully.
Nov 25 20:30:38 compute-0 ceph-mon[75144]: pgmap v770: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:38 compute-0 podman[254643]: 2025-11-25 20:30:38.427075796 +0000 UTC m=+0.064743877 container create 2ffb217877a2a91a1ae478b5dccfa22305b842b2ef4fd8389c0b561ac509188b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 20:30:38 compute-0 systemd[1]: Started libpod-conmon-2ffb217877a2a91a1ae478b5dccfa22305b842b2ef4fd8389c0b561ac509188b.scope.
Nov 25 20:30:38 compute-0 podman[254643]: 2025-11-25 20:30:38.407194358 +0000 UTC m=+0.044862419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:30:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cba1bacdf9f6eb1b7d4ca82904de176999c8fc173f2c51c6b415c290a43ebdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cba1bacdf9f6eb1b7d4ca82904de176999c8fc173f2c51c6b415c290a43ebdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cba1bacdf9f6eb1b7d4ca82904de176999c8fc173f2c51c6b415c290a43ebdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cba1bacdf9f6eb1b7d4ca82904de176999c8fc173f2c51c6b415c290a43ebdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:38 compute-0 podman[254643]: 2025-11-25 20:30:38.533750601 +0000 UTC m=+0.171418722 container init 2ffb217877a2a91a1ae478b5dccfa22305b842b2ef4fd8389c0b561ac509188b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jemison, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:30:38 compute-0 podman[254643]: 2025-11-25 20:30:38.548133782 +0000 UTC m=+0.185801853 container start 2ffb217877a2a91a1ae478b5dccfa22305b842b2ef4fd8389c0b561ac509188b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:30:38 compute-0 podman[254643]: 2025-11-25 20:30:38.552208719 +0000 UTC m=+0.189876800 container attach 2ffb217877a2a91a1ae478b5dccfa22305b842b2ef4fd8389c0b561ac509188b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jemison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:30:39 compute-0 nova_compute[248866]: 2025-11-25 20:30:39.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]: {
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:     "0": [
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:         {
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "devices": [
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "/dev/loop3"
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             ],
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_name": "ceph_lv0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_size": "21470642176",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "name": "ceph_lv0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "tags": {
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.cluster_name": "ceph",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.crush_device_class": "",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.encrypted": "0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.osd_id": "0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.type": "block",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.vdo": "0"
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             },
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "type": "block",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "vg_name": "ceph_vg0"
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:         }
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:     ],
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:     "1": [
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:         {
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "devices": [
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "/dev/loop4"
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             ],
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_name": "ceph_lv1",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_size": "21470642176",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "name": "ceph_lv1",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "tags": {
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.cluster_name": "ceph",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.crush_device_class": "",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.encrypted": "0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.osd_id": "1",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.type": "block",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.vdo": "0"
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             },
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "type": "block",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "vg_name": "ceph_vg1"
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:         }
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:     ],
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:     "2": [
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:         {
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "devices": [
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "/dev/loop5"
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             ],
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_name": "ceph_lv2",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_size": "21470642176",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "name": "ceph_lv2",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "tags": {
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.cluster_name": "ceph",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.crush_device_class": "",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.encrypted": "0",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.osd_id": "2",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.type": "block",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:                 "ceph.vdo": "0"
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             },
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "type": "block",
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:             "vg_name": "ceph_vg2"
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:         }
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]:     ]
Nov 25 20:30:39 compute-0 intelligent_jemison[254660]: }
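
[annotation] The JSON block above has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to a list of logical volumes (here a single "block" LV per OSD) carrying the LVM tags. A minimal Python sketch that reduces this structure to osd_id -> device details; field names are taken from the output above, the function name is illustrative:

    import json

    def osd_block_devices(lvm_list_json: str) -> dict:
        """Map osd_id -> backing device info from `ceph-volume lvm list --format json`."""
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:  # one {"type": "block", ...} entry per OSD in this log
                out[int(osd_id)] = {
                    "devices": lv["devices"],                # e.g. ["/dev/loop5"]
                    "lv_path": lv["lv_path"],                # e.g. "/dev/ceph_vg2/ceph_lv2"
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                }
        return out
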
Nov 25 20:30:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v771: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:39 compute-0 systemd[1]: libpod-2ffb217877a2a91a1ae478b5dccfa22305b842b2ef4fd8389c0b561ac509188b.scope: Deactivated successfully.
Nov 25 20:30:39 compute-0 podman[254643]: 2025-11-25 20:30:39.368554503 +0000 UTC m=+1.006222564 container died 2ffb217877a2a91a1ae478b5dccfa22305b842b2ef4fd8389c0b561ac509188b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jemison, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cba1bacdf9f6eb1b7d4ca82904de176999c8fc173f2c51c6b415c290a43ebdb-merged.mount: Deactivated successfully.
Nov 25 20:30:39 compute-0 podman[254643]: 2025-11-25 20:30:39.452364133 +0000 UTC m=+1.090032224 container remove 2ffb217877a2a91a1ae478b5dccfa22305b842b2ef4fd8389c0b561ac509188b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:30:39 compute-0 systemd[1]: libpod-conmon-2ffb217877a2a91a1ae478b5dccfa22305b842b2ef4fd8389c0b561ac509188b.scope: Deactivated successfully.
Nov 25 20:30:39 compute-0 sudo[254539]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:39 compute-0 sudo[254680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:39 compute-0 sudo[254680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:39 compute-0 sudo[254680]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:39 compute-0 sudo[254705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:30:39 compute-0 sudo[254705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:39 compute-0 sudo[254705]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:39 compute-0 sudo[254730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:39 compute-0 sudo[254730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:39 compute-0 sudo[254730]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:39 compute-0 sudo[254755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:30:39 compute-0 sudo[254755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:40 compute-0 podman[254820]: 2025-11-25 20:30:40.355740001 +0000 UTC m=+0.063387520 container create 83c6183b036e3501be9653a725dae5d4c81be471fe67dd9f0c3393cd09fb4377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:30:40 compute-0 systemd[1]: Started libpod-conmon-83c6183b036e3501be9653a725dae5d4c81be471fe67dd9f0c3393cd09fb4377.scope.
Nov 25 20:30:40 compute-0 podman[254820]: 2025-11-25 20:30:40.32891339 +0000 UTC m=+0.036560979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:30:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:30:40 compute-0 ceph-mon[75144]: pgmap v771: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:40 compute-0 podman[254820]: 2025-11-25 20:30:40.450590074 +0000 UTC m=+0.158237603 container init 83c6183b036e3501be9653a725dae5d4c81be471fe67dd9f0c3393cd09fb4377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:30:40 compute-0 podman[254820]: 2025-11-25 20:30:40.463625439 +0000 UTC m=+0.171272968 container start 83c6183b036e3501be9653a725dae5d4c81be471fe67dd9f0c3393cd09fb4377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:30:40 compute-0 podman[254820]: 2025-11-25 20:30:40.467638915 +0000 UTC m=+0.175286484 container attach 83c6183b036e3501be9653a725dae5d4c81be471fe67dd9f0c3393cd09fb4377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:30:40 compute-0 kind_moore[254837]: 167 167
Nov 25 20:30:40 compute-0 systemd[1]: libpod-83c6183b036e3501be9653a725dae5d4c81be471fe67dd9f0c3393cd09fb4377.scope: Deactivated successfully.
Nov 25 20:30:40 compute-0 podman[254820]: 2025-11-25 20:30:40.473214163 +0000 UTC m=+0.180861692 container died 83c6183b036e3501be9653a725dae5d4c81be471fe67dd9f0c3393cd09fb4377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc67833008fdc2ee3150c25f6113ad3207072ba4036d44686bcb1ad98adce99-merged.mount: Deactivated successfully.
Nov 25 20:30:40 compute-0 podman[254820]: 2025-11-25 20:30:40.531115387 +0000 UTC m=+0.238762906 container remove 83c6183b036e3501be9653a725dae5d4c81be471fe67dd9f0c3393cd09fb4377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moore, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:30:40 compute-0 systemd[1]: libpod-conmon-83c6183b036e3501be9653a725dae5d4c81be471fe67dd9f0c3393cd09fb4377.scope: Deactivated successfully.
Nov 25 20:30:40 compute-0 podman[254834]: 2025-11-25 20:30:40.546873443 +0000 UTC m=+0.136851276 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 20:30:40 compute-0 podman[254878]: 2025-11-25 20:30:40.759767343 +0000 UTC m=+0.070335924 container create 62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:30:40 compute-0 systemd[1]: Started libpod-conmon-62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313.scope.
Nov 25 20:30:40 compute-0 podman[254878]: 2025-11-25 20:30:40.732554932 +0000 UTC m=+0.043123563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:30:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:30:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4ae00b71d16dc8e4c5a9b9d14a35cb2eba6526ba2aa2e714411a8bca98656a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4ae00b71d16dc8e4c5a9b9d14a35cb2eba6526ba2aa2e714411a8bca98656a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4ae00b71d16dc8e4c5a9b9d14a35cb2eba6526ba2aa2e714411a8bca98656a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4ae00b71d16dc8e4c5a9b9d14a35cb2eba6526ba2aa2e714411a8bca98656a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:30:40 compute-0 podman[254878]: 2025-11-25 20:30:40.885716449 +0000 UTC m=+0.196285090 container init 62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:30:40 compute-0 podman[254878]: 2025-11-25 20:30:40.898627321 +0000 UTC m=+0.209195922 container start 62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:30:40 compute-0 podman[254878]: 2025-11-25 20:30:40.902530814 +0000 UTC m=+0.213099405 container attach 62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 20:30:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v772: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:42 compute-0 loving_beaver[254894]: {
Nov 25 20:30:42 compute-0 loving_beaver[254894]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "osd_id": 2,
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "type": "bluestore"
Nov 25 20:30:42 compute-0 loving_beaver[254894]:     },
Nov 25 20:30:42 compute-0 loving_beaver[254894]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "osd_id": 1,
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "type": "bluestore"
Nov 25 20:30:42 compute-0 loving_beaver[254894]:     },
Nov 25 20:30:42 compute-0 loving_beaver[254894]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "osd_id": 0,
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:30:42 compute-0 loving_beaver[254894]:         "type": "bluestore"
Nov 25 20:30:42 compute-0 loving_beaver[254894]:     }
Nov 25 20:30:42 compute-0 loving_beaver[254894]: }
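
[annotation] This second JSON block is the result of the `ceph-volume ... raw list --format json` command that cephadm invoked via sudo at 20:30:39: keys are OSD uuids, and each value carries the ceph_fsid, osd_id, bluestore type, and the device-mapper path. A small sketch (illustrative names) for indexing it and cross-checking against the LVM listing:

    import json

    def raw_by_osd_id(raw_json: str) -> dict:
        """Map osd_id -> dm device from `ceph-volume raw list --format json`."""
        return {e["osd_id"]: e["device"] for e in json.loads(raw_json).values()}

    # e.g. raw_by_osd_id(...)[2] == "/dev/mapper/ceph_vg2-ceph_lv2", which is the
    # dm path for ceph.block_device=/dev/ceph_vg2/ceph_lv2 in the lvm tags above.
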
Nov 25 20:30:42 compute-0 systemd[1]: libpod-62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313.scope: Deactivated successfully.
Nov 25 20:30:42 compute-0 podman[254878]: 2025-11-25 20:30:42.048429946 +0000 UTC m=+1.358998527 container died 62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:30:42 compute-0 systemd[1]: libpod-62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313.scope: Consumed 1.159s CPU time.
Nov 25 20:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee4ae00b71d16dc8e4c5a9b9d14a35cb2eba6526ba2aa2e714411a8bca98656a-merged.mount: Deactivated successfully.
Nov 25 20:30:42 compute-0 podman[254878]: 2025-11-25 20:30:42.113012107 +0000 UTC m=+1.423580668 container remove 62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:30:42 compute-0 systemd[1]: libpod-conmon-62de4728a62f7e8df24d93dd553da452c2affc5aa0e95a05f928617e78bf4313.scope: Deactivated successfully.
Nov 25 20:30:42 compute-0 sudo[254755]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:30:42 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:30:42 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:42 compute-0 sudo[254942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:30:42 compute-0 sudo[254942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:42 compute-0 sudo[254942]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:42 compute-0 sudo[254967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:30:42 compute-0 sudo[254967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:30:42 compute-0 sudo[254967]: pam_unix(sudo:session): session closed for user root
Nov 25 20:30:42 compute-0 ceph-mon[75144]: pgmap v772: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:42 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:42 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:30:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v773: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:44 compute-0 ceph-mon[75144]: pgmap v773: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v774: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:45 compute-0 ceph-mon[75144]: pgmap v774: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v775: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:48 compute-0 podman[254992]: 2025-11-25 20:30:48.045446835 +0000 UTC m=+0.135598352 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 20:30:48 compute-0 ceph-mon[75144]: pgmap v775: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:30:48.946 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:30:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:30:48.947 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:30:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:30:48.947 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:30:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v776: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:50 compute-0 ceph-mon[75144]: pgmap v776: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v777: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:51 compute-0 ceph-mon[75144]: pgmap v777: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v778: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:54 compute-0 ceph-mon[75144]: pgmap v778: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:30:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v779: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:56 compute-0 ceph-mon[75144]: pgmap v779: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:30:56
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'images', '.mgr', 'cephfs.cephfs.meta', 'volumes']
Nov 25 20:30:56 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
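
[annotation] Here the balancer wakes in upmap mode, finds the pools already balanced, and prepares no changes. The 0.050000 ceiling is, assuming the default value is in effect, the mgr option target_max_misplaced_ratio, and the "/10" is the balancer's per-round optimization cap. A one-liner sketch to confirm the ratio on a live cluster (assumes a reachable cluster and admin keyring):

    import subprocess

    # Prints e.g. "0.050000", matching "max misplaced 0.050000" above.
    print(subprocess.check_output(
        ["ceph", "config", "get", "mgr", "target_max_misplaced_ratio"]).decode().strip())
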
Nov 25 20:30:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v780: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:58 compute-0 ceph-mon[75144]: pgmap v780: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:30:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v781: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:00 compute-0 ceph-mon[75144]: pgmap v781: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v782: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:31:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
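
[annotation] The autoscaler figures above are reproducible arithmetic: each pool's pg target is its usage ratio times the cluster-wide PG budget, then quantized (pools using 0.0 of space simply keep their current 32). For the '.mgr' line, the logged target falls out exactly if one assumes the default 100 target PGs per OSD and the three OSDs listed earlier; both are assumptions consistent with this cluster, not values read from the log:

    usage_ratio = 1.4371499967441557e-05   # "using ... of space" above
    pg_budget = 100 * 3                    # target PGs per OSD x OSD count (assumed)
    print(usage_ratio * pg_budget)         # 0.004311449990232467, the "pg target" above
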
Nov 25 20:31:02 compute-0 ceph-mon[75144]: pgmap v782: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v783: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:04 compute-0 ceph-mon[75144]: pgmap v783: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v784: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:06 compute-0 ceph-mon[75144]: pgmap v784: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v785: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:07 compute-0 podman[255020]: 2025-11-25 20:31:07.997840089 +0000 UTC m=+0.086886643 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 20:31:08 compute-0 ceph-mon[75144]: pgmap v785: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v786: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:10 compute-0 ceph-mon[75144]: pgmap v786: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:10 compute-0 podman[255037]: 2025-11-25 20:31:10.990543719 +0000 UTC m=+0.084667625 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:31:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v787: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:12 compute-0 ceph-mon[75144]: pgmap v787: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v788: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:14 compute-0 ceph-mon[75144]: pgmap v788: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v789: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:16 compute-0 ceph-mon[75144]: pgmap v789: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:31:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3079906060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:31:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:31:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3079906060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:31:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v790: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3079906060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:31:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3079906060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:31:18 compute-0 ceph-mon[75144]: pgmap v790: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:19 compute-0 podman[255058]: 2025-11-25 20:31:19.04041226 +0000 UTC m=+0.129101730 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 20:31:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v791: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:20 compute-0 ceph-mon[75144]: pgmap v791: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v792: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:21 compute-0 ceph-mon[75144]: pgmap v792: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v793: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:24 compute-0 ceph-mon[75144]: pgmap v793: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v794: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:26 compute-0 ceph-mon[75144]: pgmap v794: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:31:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:31:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:31:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:31:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:31:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:31:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v795: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:28 compute-0 ceph-mon[75144]: pgmap v795: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v796: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:30 compute-0 ceph-mon[75144]: pgmap v796: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v797: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:32 compute-0 ceph-mon[75144]: pgmap v797: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:33 compute-0 nova_compute[248866]: 2025-11-25 20:31:33.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:33 compute-0 nova_compute[248866]: 2025-11-25 20:31:33.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v798: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:34 compute-0 ceph-mon[75144]: pgmap v798: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:35 compute-0 nova_compute[248866]: 2025-11-25 20:31:35.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.333119) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102695333199, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1191, "num_deletes": 251, "total_data_size": 1204797, "memory_usage": 1226632, "flush_reason": "Manual Compaction"}
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102695343517, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 724766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15505, "largest_seqno": 16695, "table_properties": {"data_size": 720433, "index_size": 1857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11059, "raw_average_key_size": 20, "raw_value_size": 711005, "raw_average_value_size": 1288, "num_data_blocks": 85, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764102573, "oldest_key_time": 1764102573, "file_creation_time": 1764102695, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 10456 microseconds, and 6476 cpu microseconds.
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.343585) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 724766 bytes OK
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.343612) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.345745) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.345770) EVENT_LOG_v1 {"time_micros": 1764102695345761, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.345831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1199377, prev total WAL file size 1199377, number of live WAL files 2.
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.346785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(707KB)], [38(5454KB)]
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102695346879, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 6310509, "oldest_snapshot_seqno": -1}
Nov 25 20:31:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v799: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 3522 keys, 4677579 bytes, temperature: kUnknown
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102695382387, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 4677579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4654360, "index_size": 13318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8837, "raw_key_size": 83081, "raw_average_key_size": 23, "raw_value_size": 4591414, "raw_average_value_size": 1303, "num_data_blocks": 579, "num_entries": 3522, "num_filter_entries": 3522, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764102695, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.382723) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 4677579 bytes
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.384577) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.3 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 5.3 +0.0 blob) out(4.5 +0.0 blob), read-write-amplify(15.2) write-amplify(6.5) OK, records in: 3981, records dropped: 459 output_compression: NoCompression
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.384617) EVENT_LOG_v1 {"time_micros": 1764102695384596, "job": 18, "event": "compaction_finished", "compaction_time_micros": 35597, "compaction_time_cpu_micros": 26536, "output_level": 6, "num_output_files": 1, "total_output_size": 4677579, "num_input_records": 3981, "num_output_records": 3522, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102695385030, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102695386895, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.346680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.386995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.387003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.387007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.387010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:31:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:31:35.387013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.062 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.063 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.063 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.085 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.085 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.086 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.087 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.181 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.182 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.182 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.183 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.183 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:31:36 compute-0 ceph-mon[75144]: pgmap v799: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:31:36 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218210040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.620 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.822 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.824 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5338MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.824 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:31:36 compute-0 nova_compute[248866]: 2025-11-25 20:31:36.824 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.079 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.080 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.176 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing inventories for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.268 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating ProviderTree inventory for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.268 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating inventory in ProviderTree for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.293 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing aggregate associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.326 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing trait associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, traits: HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.343 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:31:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v800: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:37 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3218210040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:31:37 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:31:37 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2921951903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.843 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.851 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.881 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.883 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:31:37 compute-0 nova_compute[248866]: 2025-11-25 20:31:37.884 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:31:38 compute-0 ceph-mon[75144]: pgmap v800: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:38 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2921951903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:31:38 compute-0 nova_compute[248866]: 2025-11-25 20:31:38.842 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:38 compute-0 nova_compute[248866]: 2025-11-25 20:31:38.842 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:38 compute-0 podman[255129]: 2025-11-25 20:31:38.967201318 +0000 UTC m=+0.065685888 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 20:31:39 compute-0 nova_compute[248866]: 2025-11-25 20:31:39.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:31:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v801: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:39 compute-0 ceph-mon[75144]: pgmap v801: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v802: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:42 compute-0 podman[255148]: 2025-11-25 20:31:42.005152165 +0000 UTC m=+0.099409192 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:31:42 compute-0 sudo[255168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:31:42 compute-0 sudo[255168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:42 compute-0 sudo[255168]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:42 compute-0 ceph-mon[75144]: pgmap v802: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:42 compute-0 sudo[255193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:31:42 compute-0 sudo[255193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:42 compute-0 sudo[255193]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:42 compute-0 sudo[255218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:31:42 compute-0 sudo[255218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:42 compute-0 sudo[255218]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:42 compute-0 sudo[255243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:31:42 compute-0 sudo[255243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:43 compute-0 sudo[255243]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v803: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:31:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:31:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:31:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:31:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:31:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:31:43 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev bf54479a-1be2-46c8-bf50-5c1a4224e8dc does not exist
Nov 25 20:31:43 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev c3f76592-b311-495b-a376-729dcb6d7bc3 does not exist
Nov 25 20:31:43 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 961665a6-c5e9-4b2a-8f3e-3446f89be4e2 does not exist
Nov 25 20:31:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:31:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:31:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:31:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:31:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:31:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:31:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:31:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:31:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:31:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:31:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:31:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:31:43 compute-0 sudo[255300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:31:43 compute-0 sudo[255300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:43 compute-0 sudo[255300]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:43 compute-0 sudo[255325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:31:43 compute-0 sudo[255325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:43 compute-0 sudo[255325]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:43 compute-0 sudo[255350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:31:43 compute-0 sudo[255350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:43 compute-0 sudo[255350]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:43 compute-0 sudo[255375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:31:43 compute-0 sudo[255375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:44 compute-0 podman[255438]: 2025-11-25 20:31:44.164463043 +0000 UTC m=+0.056945535 container create e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:31:44 compute-0 systemd[1]: Started libpod-conmon-e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade.scope.
Nov 25 20:31:44 compute-0 podman[255438]: 2025-11-25 20:31:44.135859457 +0000 UTC m=+0.028342009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:31:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:31:44 compute-0 podman[255438]: 2025-11-25 20:31:44.26408742 +0000 UTC m=+0.156569972 container init e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:31:44 compute-0 podman[255438]: 2025-11-25 20:31:44.275332821 +0000 UTC m=+0.167815313 container start e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:31:44 compute-0 podman[255438]: 2025-11-25 20:31:44.279455681 +0000 UTC m=+0.171938233 container attach e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:31:44 compute-0 determined_wescoff[255455]: 167 167
Nov 25 20:31:44 compute-0 systemd[1]: libpod-e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade.scope: Deactivated successfully.
Nov 25 20:31:44 compute-0 conmon[255455]: conmon e704866671506b4edb1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade.scope/container/memory.events
Nov 25 20:31:44 compute-0 podman[255438]: 2025-11-25 20:31:44.284210708 +0000 UTC m=+0.176693200 container died e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wescoff, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 20:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-914b0f7886e6c51c8d39d6e50890d13ba250e16d7c99029735e0d219d0548886-merged.mount: Deactivated successfully.
Nov 25 20:31:44 compute-0 podman[255438]: 2025-11-25 20:31:44.333408635 +0000 UTC m=+0.225891107 container remove e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wescoff, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:31:44 compute-0 systemd[1]: libpod-conmon-e704866671506b4edb1fd713f7385aa61f78b4b983d5a67443f1379066f6cade.scope: Deactivated successfully.
Nov 25 20:31:44 compute-0 ceph-mon[75144]: pgmap v803: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:44 compute-0 podman[255477]: 2025-11-25 20:31:44.572205117 +0000 UTC m=+0.061897788 container create 145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 20:31:44 compute-0 systemd[1]: Started libpod-conmon-145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad.scope.
Nov 25 20:31:44 compute-0 podman[255477]: 2025-11-25 20:31:44.547200868 +0000 UTC m=+0.036893599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:31:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bff529d9fde06d81ddf9a87724ca078a170e92de384eb3aab02f8dce2ef992c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bff529d9fde06d81ddf9a87724ca078a170e92de384eb3aab02f8dce2ef992c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bff529d9fde06d81ddf9a87724ca078a170e92de384eb3aab02f8dce2ef992c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bff529d9fde06d81ddf9a87724ca078a170e92de384eb3aab02f8dce2ef992c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bff529d9fde06d81ddf9a87724ca078a170e92de384eb3aab02f8dce2ef992c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:44 compute-0 podman[255477]: 2025-11-25 20:31:44.673434236 +0000 UTC m=+0.163126937 container init 145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cori, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:31:44 compute-0 podman[255477]: 2025-11-25 20:31:44.688934552 +0000 UTC m=+0.178627233 container start 145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cori, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:31:44 compute-0 podman[255477]: 2025-11-25 20:31:44.692646971 +0000 UTC m=+0.182339662 container attach 145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:31:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v804: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:45 compute-0 quirky_cori[255494]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:31:45 compute-0 quirky_cori[255494]: --> relative data size: 1.0
Nov 25 20:31:45 compute-0 quirky_cori[255494]: --> All data devices are unavailable
Nov 25 20:31:45 compute-0 systemd[1]: libpod-145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad.scope: Deactivated successfully.
Nov 25 20:31:45 compute-0 systemd[1]: libpod-145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad.scope: Consumed 1.172s CPU time.
Nov 25 20:31:45 compute-0 podman[255523]: 2025-11-25 20:31:45.959773928 +0000 UTC m=+0.031482294 container died 145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 20:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bff529d9fde06d81ddf9a87724ca078a170e92de384eb3aab02f8dce2ef992c-merged.mount: Deactivated successfully.
Nov 25 20:31:46 compute-0 podman[255523]: 2025-11-25 20:31:46.027666416 +0000 UTC m=+0.099374712 container remove 145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cori, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:31:46 compute-0 systemd[1]: libpod-conmon-145559c0b03938bd15b3046076ceb36cdde7ec0769ac7fe3d69b73f5c40c76ad.scope: Deactivated successfully.
Nov 25 20:31:46 compute-0 sudo[255375]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:46 compute-0 sudo[255538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:31:46 compute-0 sudo[255538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:46 compute-0 sudo[255538]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:46 compute-0 sudo[255563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:31:46 compute-0 sudo[255563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:46 compute-0 sudo[255563]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:46 compute-0 sudo[255588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:31:46 compute-0 sudo[255588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:46 compute-0 sudo[255588]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:46 compute-0 sudo[255613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:31:46 compute-0 sudo[255613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:46 compute-0 ceph-mon[75144]: pgmap v804: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:46 compute-0 podman[255679]: 2025-11-25 20:31:46.884884191 +0000 UTC m=+0.067154999 container create d22ca8e730cadfcf17be5583bb238adb4965ea5282044951af8c0a0bc338e4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_aryabhata, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:31:46 compute-0 systemd[1]: Started libpod-conmon-d22ca8e730cadfcf17be5583bb238adb4965ea5282044951af8c0a0bc338e4f7.scope.
Nov 25 20:31:46 compute-0 podman[255679]: 2025-11-25 20:31:46.858462073 +0000 UTC m=+0.040732941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:31:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:31:46 compute-0 podman[255679]: 2025-11-25 20:31:46.976735148 +0000 UTC m=+0.159005956 container init d22ca8e730cadfcf17be5583bb238adb4965ea5282044951af8c0a0bc338e4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_aryabhata, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:31:46 compute-0 podman[255679]: 2025-11-25 20:31:46.990689622 +0000 UTC m=+0.172960430 container start d22ca8e730cadfcf17be5583bb238adb4965ea5282044951af8c0a0bc338e4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:31:46 compute-0 podman[255679]: 2025-11-25 20:31:46.995140922 +0000 UTC m=+0.177411740 container attach d22ca8e730cadfcf17be5583bb238adb4965ea5282044951af8c0a0bc338e4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:31:46 compute-0 strange_aryabhata[255696]: 167 167
Nov 25 20:31:46 compute-0 systemd[1]: libpod-d22ca8e730cadfcf17be5583bb238adb4965ea5282044951af8c0a0bc338e4f7.scope: Deactivated successfully.
Nov 25 20:31:46 compute-0 podman[255679]: 2025-11-25 20:31:46.999167719 +0000 UTC m=+0.181438537 container died d22ca8e730cadfcf17be5583bb238adb4965ea5282044951af8c0a0bc338e4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_aryabhata, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-078088200f61f42785bac338812fea63ad22cf01f10906ad95ee317dcff6c2be-merged.mount: Deactivated successfully.
Nov 25 20:31:47 compute-0 podman[255679]: 2025-11-25 20:31:47.047435682 +0000 UTC m=+0.229706490 container remove d22ca8e730cadfcf17be5583bb238adb4965ea5282044951af8c0a0bc338e4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:31:47 compute-0 systemd[1]: libpod-conmon-d22ca8e730cadfcf17be5583bb238adb4965ea5282044951af8c0a0bc338e4f7.scope: Deactivated successfully.
Nov 25 20:31:47 compute-0 podman[255718]: 2025-11-25 20:31:47.298354587 +0000 UTC m=+0.088999922 container create 01d2ff8b6b687677c414494409751718bea06423f3ba3b2a1008fcd2544be82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 20:31:47 compute-0 podman[255718]: 2025-11-25 20:31:47.251991037 +0000 UTC m=+0.042636422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:31:47 compute-0 systemd[1]: Started libpod-conmon-01d2ff8b6b687677c414494409751718bea06423f3ba3b2a1008fcd2544be82c.scope.
Nov 25 20:31:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac6d26921cda4e7fd482a477c64f11dcd27ed043164403ab4b44c28c0aea577/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac6d26921cda4e7fd482a477c64f11dcd27ed043164403ab4b44c28c0aea577/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac6d26921cda4e7fd482a477c64f11dcd27ed043164403ab4b44c28c0aea577/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac6d26921cda4e7fd482a477c64f11dcd27ed043164403ab4b44c28c0aea577/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v805: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:47 compute-0 podman[255718]: 2025-11-25 20:31:47.39441888 +0000 UTC m=+0.185064225 container init 01d2ff8b6b687677c414494409751718bea06423f3ba3b2a1008fcd2544be82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:31:47 compute-0 podman[255718]: 2025-11-25 20:31:47.409200945 +0000 UTC m=+0.199846290 container start 01d2ff8b6b687677c414494409751718bea06423f3ba3b2a1008fcd2544be82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:31:47 compute-0 podman[255718]: 2025-11-25 20:31:47.413316035 +0000 UTC m=+0.203961550 container attach 01d2ff8b6b687677c414494409751718bea06423f3ba3b2a1008fcd2544be82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:31:48 compute-0 zen_benz[255734]: {
Nov 25 20:31:48 compute-0 zen_benz[255734]:     "0": [
Nov 25 20:31:48 compute-0 zen_benz[255734]:         {
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "devices": [
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "/dev/loop3"
Nov 25 20:31:48 compute-0 zen_benz[255734]:             ],
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_name": "ceph_lv0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_size": "21470642176",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "name": "ceph_lv0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "tags": {
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.cluster_name": "ceph",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.crush_device_class": "",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.encrypted": "0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.osd_id": "0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.type": "block",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.vdo": "0"
Nov 25 20:31:48 compute-0 zen_benz[255734]:             },
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "type": "block",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "vg_name": "ceph_vg0"
Nov 25 20:31:48 compute-0 zen_benz[255734]:         }
Nov 25 20:31:48 compute-0 zen_benz[255734]:     ],
Nov 25 20:31:48 compute-0 zen_benz[255734]:     "1": [
Nov 25 20:31:48 compute-0 zen_benz[255734]:         {
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "devices": [
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "/dev/loop4"
Nov 25 20:31:48 compute-0 zen_benz[255734]:             ],
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_name": "ceph_lv1",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_size": "21470642176",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "name": "ceph_lv1",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "tags": {
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.cluster_name": "ceph",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.crush_device_class": "",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.encrypted": "0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.osd_id": "1",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.type": "block",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.vdo": "0"
Nov 25 20:31:48 compute-0 zen_benz[255734]:             },
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "type": "block",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "vg_name": "ceph_vg1"
Nov 25 20:31:48 compute-0 zen_benz[255734]:         }
Nov 25 20:31:48 compute-0 zen_benz[255734]:     ],
Nov 25 20:31:48 compute-0 zen_benz[255734]:     "2": [
Nov 25 20:31:48 compute-0 zen_benz[255734]:         {
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "devices": [
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "/dev/loop5"
Nov 25 20:31:48 compute-0 zen_benz[255734]:             ],
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_name": "ceph_lv2",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_size": "21470642176",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "name": "ceph_lv2",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "tags": {
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.cluster_name": "ceph",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.crush_device_class": "",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.encrypted": "0",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.osd_id": "2",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.type": "block",
Nov 25 20:31:48 compute-0 zen_benz[255734]:                 "ceph.vdo": "0"
Nov 25 20:31:48 compute-0 zen_benz[255734]:             },
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "type": "block",
Nov 25 20:31:48 compute-0 zen_benz[255734]:             "vg_name": "ceph_vg2"
Nov 25 20:31:48 compute-0 zen_benz[255734]:         }
Nov 25 20:31:48 compute-0 zen_benz[255734]:     ]
Nov 25 20:31:48 compute-0 zen_benz[255734]: }
Nov 25 20:31:48 compute-0 systemd[1]: libpod-01d2ff8b6b687677c414494409751718bea06423f3ba3b2a1008fcd2544be82c.scope: Deactivated successfully.
Nov 25 20:31:48 compute-0 podman[255718]: 2025-11-25 20:31:48.173184535 +0000 UTC m=+0.963829860 container died 01d2ff8b6b687677c414494409751718bea06423f3ba3b2a1008fcd2544be82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ac6d26921cda4e7fd482a477c64f11dcd27ed043164403ab4b44c28c0aea577-merged.mount: Deactivated successfully.
Nov 25 20:31:48 compute-0 podman[255718]: 2025-11-25 20:31:48.25297473 +0000 UTC m=+1.043620035 container remove 01d2ff8b6b687677c414494409751718bea06423f3ba3b2a1008fcd2544be82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:31:48 compute-0 systemd[1]: libpod-conmon-01d2ff8b6b687677c414494409751718bea06423f3ba3b2a1008fcd2544be82c.scope: Deactivated successfully.
Nov 25 20:31:48 compute-0 sudo[255613]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:48 compute-0 sudo[255758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:31:48 compute-0 sudo[255758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:48 compute-0 sudo[255758]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:48 compute-0 sudo[255783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:31:48 compute-0 sudo[255783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:48 compute-0 sudo[255783]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:48 compute-0 ceph-mon[75144]: pgmap v805: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:48 compute-0 sudo[255808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:31:48 compute-0 sudo[255808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:48 compute-0 sudo[255808]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:48 compute-0 sudo[255833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:31:48 compute-0 sudo[255833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:31:48.947 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:31:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:31:48.949 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:31:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:31:48.949 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:31:49 compute-0 podman[255899]: 2025-11-25 20:31:49.105716735 +0000 UTC m=+0.056880863 container create 6577114a81d4817a2ea1f210bf68b7d485eb60fe77815f1d1bf07a7e01a05f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:31:49 compute-0 systemd[1]: Started libpod-conmon-6577114a81d4817a2ea1f210bf68b7d485eb60fe77815f1d1bf07a7e01a05f7e.scope.
Nov 25 20:31:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:31:49 compute-0 podman[255899]: 2025-11-25 20:31:49.085224947 +0000 UTC m=+0.036389065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:31:49 compute-0 podman[255899]: 2025-11-25 20:31:49.18806826 +0000 UTC m=+0.139232448 container init 6577114a81d4817a2ea1f210bf68b7d485eb60fe77815f1d1bf07a7e01a05f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:31:49 compute-0 podman[255899]: 2025-11-25 20:31:49.198041266 +0000 UTC m=+0.149205374 container start 6577114a81d4817a2ea1f210bf68b7d485eb60fe77815f1d1bf07a7e01a05f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:31:49 compute-0 podman[255899]: 2025-11-25 20:31:49.202745483 +0000 UTC m=+0.153909611 container attach 6577114a81d4817a2ea1f210bf68b7d485eb60fe77815f1d1bf07a7e01a05f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 20:31:49 compute-0 nice_volhard[255916]: 167 167
Nov 25 20:31:49 compute-0 systemd[1]: libpod-6577114a81d4817a2ea1f210bf68b7d485eb60fe77815f1d1bf07a7e01a05f7e.scope: Deactivated successfully.
Nov 25 20:31:49 compute-0 podman[255899]: 2025-11-25 20:31:49.20564985 +0000 UTC m=+0.156813978 container died 6577114a81d4817a2ea1f210bf68b7d485eb60fe77815f1d1bf07a7e01a05f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-35fca04ef1e7701934c0ee814429e7c046f8899eeb574524dc2f2d04a57dcc23-merged.mount: Deactivated successfully.
Nov 25 20:31:49 compute-0 podman[255899]: 2025-11-25 20:31:49.258105895 +0000 UTC m=+0.209270013 container remove 6577114a81d4817a2ea1f210bf68b7d485eb60fe77815f1d1bf07a7e01a05f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:31:49 compute-0 systemd[1]: libpod-conmon-6577114a81d4817a2ea1f210bf68b7d485eb60fe77815f1d1bf07a7e01a05f7e.scope: Deactivated successfully.
Nov 25 20:31:49 compute-0 podman[255913]: 2025-11-25 20:31:49.293637566 +0000 UTC m=+0.139392632 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:31:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v806: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:49 compute-0 podman[255962]: 2025-11-25 20:31:49.474302511 +0000 UTC m=+0.048890069 container create bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:31:49 compute-0 systemd[1]: Started libpod-conmon-bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e.scope.
Nov 25 20:31:49 compute-0 podman[255962]: 2025-11-25 20:31:49.451670906 +0000 UTC m=+0.026258504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:31:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8f70986e67af774cd0ac9a287ea3882d7fcf49c6a5acd7ad311b1bd1b0b77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8f70986e67af774cd0ac9a287ea3882d7fcf49c6a5acd7ad311b1bd1b0b77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8f70986e67af774cd0ac9a287ea3882d7fcf49c6a5acd7ad311b1bd1b0b77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8f70986e67af774cd0ac9a287ea3882d7fcf49c6a5acd7ad311b1bd1b0b77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:31:49 compute-0 podman[255962]: 2025-11-25 20:31:49.572173861 +0000 UTC m=+0.146761449 container init bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 20:31:49 compute-0 podman[255962]: 2025-11-25 20:31:49.580043472 +0000 UTC m=+0.154631040 container start bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:31:49 compute-0 podman[255962]: 2025-11-25 20:31:49.583589737 +0000 UTC m=+0.158177335 container attach bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:31:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:50 compute-0 ceph-mon[75144]: pgmap v806: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]: {
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "osd_id": 2,
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "type": "bluestore"
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:     },
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "osd_id": 1,
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "type": "bluestore"
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:     },
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "osd_id": 0,
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:         "type": "bluestore"
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]:     }
Nov 25 20:31:50 compute-0 vigilant_antonelli[255978]: }
Nov 25 20:31:50 compute-0 systemd[1]: libpod-bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e.scope: Deactivated successfully.
Nov 25 20:31:50 compute-0 podman[255962]: 2025-11-25 20:31:50.691725669 +0000 UTC m=+1.266313287 container died bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:31:50 compute-0 systemd[1]: libpod-bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e.scope: Consumed 1.120s CPU time.
Nov 25 20:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cf8f70986e67af774cd0ac9a287ea3882d7fcf49c6a5acd7ad311b1bd1b0b77-merged.mount: Deactivated successfully.
Nov 25 20:31:50 compute-0 podman[255962]: 2025-11-25 20:31:50.767753883 +0000 UTC m=+1.342341441 container remove bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 20:31:50 compute-0 systemd[1]: libpod-conmon-bea17cb3828dc978e4a6c063a294a92e0bd3a008da53d5f19d2194e74f0b2d2e.scope: Deactivated successfully.
Nov 25 20:31:50 compute-0 sudo[255833]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:31:50 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:31:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:31:50 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:31:50 compute-0 sudo[256025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:31:50 compute-0 sudo[256025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:50 compute-0 sudo[256025]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:51 compute-0 sudo[256050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:31:51 compute-0 sudo[256050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:31:51 compute-0 sudo[256050]: pam_unix(sudo:session): session closed for user root
Nov 25 20:31:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v807: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:51 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:31:51 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:31:51 compute-0 ceph-mon[75144]: pgmap v807: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v808: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:54 compute-0 ceph-mon[75144]: pgmap v808: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:31:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v809: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:56 compute-0 ceph-mon[75144]: pgmap v809: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:31:56 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:31:56
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms', '.mgr', 'backups', 'volumes']
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:31:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v810: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:58 compute-0 ceph-mon[75144]: pgmap v810: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:31:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v811: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:00 compute-0 ceph-mon[75144]: pgmap v811: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v812: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:32:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:32:02 compute-0 ceph-mon[75144]: pgmap v812: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v813: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:04 compute-0 ceph-mon[75144]: pgmap v813: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v814: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:06 compute-0 ceph-mon[75144]: pgmap v814: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v815: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:08 compute-0 ceph-mon[75144]: pgmap v815: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v816: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:10 compute-0 podman[256076]: 2025-11-25 20:32:10.02444782 +0000 UTC m=+0.106802160 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 20:32:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:10 compute-0 sshd-session[256075]: banner exchange: Connection from 222.170.171.206 port 49712: invalid format
Nov 25 20:32:10 compute-0 ceph-mon[75144]: pgmap v816: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v817: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:12 compute-0 ceph-mon[75144]: pgmap v817: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:13 compute-0 podman[256095]: 2025-11-25 20:32:13.001455976 +0000 UTC m=+0.091562631 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:32:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v818: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:13 compute-0 ceph-mon[75144]: pgmap v818: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v819: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:16 compute-0 ceph-mon[75144]: pgmap v819: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:32:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573839149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:32:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:32:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573839149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:32:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v820: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2573839149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:32:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2573839149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:32:18 compute-0 ceph-mon[75144]: pgmap v820: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v821: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:20 compute-0 podman[256116]: 2025-11-25 20:32:20.053710844 +0000 UTC m=+0.152449452 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 25 20:32:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:20 compute-0 ceph-mon[75144]: pgmap v821: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v822: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:22 compute-0 ceph-mon[75144]: pgmap v822: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v823: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:24 compute-0 ceph-mon[75144]: pgmap v823: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v824: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:26 compute-0 ceph-mon[75144]: pgmap v824: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:32:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:32:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:32:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:32:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:32:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:32:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v825: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:28 compute-0 ceph-mon[75144]: pgmap v825: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v826: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:30 compute-0 ceph-mon[75144]: pgmap v826: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v827: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:32 compute-0 ceph-mon[75144]: pgmap v827: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v828: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:34 compute-0 nova_compute[248866]: 2025-11-25 20:32:34.037 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:32:34 compute-0 nova_compute[248866]: 2025-11-25 20:32:34.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:32:34 compute-0 ceph-mon[75144]: pgmap v828: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v829: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:36 compute-0 nova_compute[248866]: 2025-11-25 20:32:36.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:32:36 compute-0 nova_compute[248866]: 2025-11-25 20:32:36.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:32:36 compute-0 ceph-mon[75144]: pgmap v829: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:37 compute-0 nova_compute[248866]: 2025-11-25 20:32:37.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:32:37 compute-0 nova_compute[248866]: 2025-11-25 20:32:37.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:32:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v830: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:37 compute-0 ceph-mon[75144]: pgmap v830: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.061 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.061 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.093 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.094 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.094 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.095 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.095 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:32:38 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:32:38 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2189852238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.559 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:32:38 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2189852238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.828 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.831 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5338MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.832 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.833 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.903 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.904 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:32:38 compute-0 nova_compute[248866]: 2025-11-25 20:32:38.929 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:32:39 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:32:39 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377628004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:32:39 compute-0 nova_compute[248866]: 2025-11-25 20:32:39.369 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:32:39 compute-0 nova_compute[248866]: 2025-11-25 20:32:39.377 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:32:39 compute-0 nova_compute[248866]: 2025-11-25 20:32:39.397 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:32:39 compute-0 nova_compute[248866]: 2025-11-25 20:32:39.400 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:32:39 compute-0 nova_compute[248866]: 2025-11-25 20:32:39.400 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:32:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v831: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:39 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3377628004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:32:39 compute-0 ceph-mon[75144]: pgmap v831: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:40 compute-0 nova_compute[248866]: 2025-11-25 20:32:40.381 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:32:40 compute-0 nova_compute[248866]: 2025-11-25 20:32:40.382 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:32:40 compute-0 podman[256185]: 2025-11-25 20:32:40.989317076 +0000 UTC m=+0.081047120 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 20:32:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v832: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:42 compute-0 ceph-mon[75144]: pgmap v832: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v833: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:43 compute-0 podman[256204]: 2025-11-25 20:32:43.980880531 +0000 UTC m=+0.082571871 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 20:32:44 compute-0 ceph-mon[75144]: pgmap v833: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v834: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:46 compute-0 ceph-mon[75144]: pgmap v834: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v835: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:48 compute-0 ceph-mon[75144]: pgmap v835: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:32:48.948 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:32:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:32:48.949 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:32:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:32:48.949 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:32:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v836: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:49 compute-0 ceph-mon[75144]: pgmap v836: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:51 compute-0 podman[256225]: 2025-11-25 20:32:51.048587501 +0000 UTC m=+0.137392668 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 20:32:51 compute-0 sudo[256253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:32:51 compute-0 sudo[256253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:51 compute-0 sudo[256253]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:51 compute-0 sudo[256278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:32:51 compute-0 sudo[256278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:51 compute-0 sudo[256278]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:51 compute-0 sudo[256303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:32:51 compute-0 sudo[256303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:51 compute-0 sudo[256303]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:51 compute-0 sudo[256328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:32:51 compute-0 sudo[256328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v837: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:52 compute-0 sudo[256328]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:32:52 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:32:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:32:52 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:32:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:32:52 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:32:52 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 1ce05d31-e1c2-4f5b-bf74-fd8a6cbf3520 does not exist
Nov 25 20:32:52 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 3dd6747f-013b-4ee6-adc4-ad445cd4d4c1 does not exist
Nov 25 20:32:52 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 734032d8-9260-477c-8587-176956c53681 does not exist
Nov 25 20:32:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:32:52 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:32:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:32:52 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:32:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:32:52 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:32:52 compute-0 sudo[256384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:32:52 compute-0 sudo[256384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:52 compute-0 sudo[256384]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:52 compute-0 sudo[256409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:32:52 compute-0 sudo[256409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:52 compute-0 sudo[256409]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:52 compute-0 sudo[256434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:32:52 compute-0 sudo[256434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:52 compute-0 sudo[256434]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:52 compute-0 sudo[256459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:32:52 compute-0 sudo[256459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:52 compute-0 ceph-mon[75144]: pgmap v837: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:32:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:32:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:32:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:32:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:32:52 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:32:52 compute-0 podman[256524]: 2025-11-25 20:32:52.814943302 +0000 UTC m=+0.067368145 container create 241992e4bbdc7cf63dde2147e7009bb0c756c57c5608d3552a237e099303c3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:32:52 compute-0 systemd[1]: Started libpod-conmon-241992e4bbdc7cf63dde2147e7009bb0c756c57c5608d3552a237e099303c3db.scope.
Nov 25 20:32:52 compute-0 podman[256524]: 2025-11-25 20:32:52.787643721 +0000 UTC m=+0.040068624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:32:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:32:52 compute-0 podman[256524]: 2025-11-25 20:32:52.911540277 +0000 UTC m=+0.163965170 container init 241992e4bbdc7cf63dde2147e7009bb0c756c57c5608d3552a237e099303c3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:32:52 compute-0 podman[256524]: 2025-11-25 20:32:52.924918835 +0000 UTC m=+0.177343688 container start 241992e4bbdc7cf63dde2147e7009bb0c756c57c5608d3552a237e099303c3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:32:52 compute-0 podman[256524]: 2025-11-25 20:32:52.9288275 +0000 UTC m=+0.181252403 container attach 241992e4bbdc7cf63dde2147e7009bb0c756c57c5608d3552a237e099303c3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:32:52 compute-0 hungry_curie[256541]: 167 167
Nov 25 20:32:52 compute-0 systemd[1]: libpod-241992e4bbdc7cf63dde2147e7009bb0c756c57c5608d3552a237e099303c3db.scope: Deactivated successfully.
Nov 25 20:32:52 compute-0 podman[256524]: 2025-11-25 20:32:52.933946637 +0000 UTC m=+0.186371480 container died 241992e4bbdc7cf63dde2147e7009bb0c756c57c5608d3552a237e099303c3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c90c03303bda9b73eb4a46427821f3049c882cd2394f843983fa0c0073a9ffdd-merged.mount: Deactivated successfully.
Nov 25 20:32:52 compute-0 podman[256524]: 2025-11-25 20:32:52.9833668 +0000 UTC m=+0.235791643 container remove 241992e4bbdc7cf63dde2147e7009bb0c756c57c5608d3552a237e099303c3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 20:32:53 compute-0 systemd[1]: libpod-conmon-241992e4bbdc7cf63dde2147e7009bb0c756c57c5608d3552a237e099303c3db.scope: Deactivated successfully.
Nov 25 20:32:53 compute-0 podman[256565]: 2025-11-25 20:32:53.246008799 +0000 UTC m=+0.071414322 container create bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:32:53 compute-0 systemd[1]: Started libpod-conmon-bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3.scope.
Nov 25 20:32:53 compute-0 podman[256565]: 2025-11-25 20:32:53.217162097 +0000 UTC m=+0.042567660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:32:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6145acafa66e9470c5d8c2e86d7fae0707c5cfbb2c015fa850b02284c1a44f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6145acafa66e9470c5d8c2e86d7fae0707c5cfbb2c015fa850b02284c1a44f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6145acafa66e9470c5d8c2e86d7fae0707c5cfbb2c015fa850b02284c1a44f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6145acafa66e9470c5d8c2e86d7fae0707c5cfbb2c015fa850b02284c1a44f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6145acafa66e9470c5d8c2e86d7fae0707c5cfbb2c015fa850b02284c1a44f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:53 compute-0 podman[256565]: 2025-11-25 20:32:53.351650957 +0000 UTC m=+0.177056480 container init bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:32:53 compute-0 podman[256565]: 2025-11-25 20:32:53.365468627 +0000 UTC m=+0.190874140 container start bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 20:32:53 compute-0 podman[256565]: 2025-11-25 20:32:53.370222745 +0000 UTC m=+0.195628258 container attach bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 20:32:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v838: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:54 compute-0 ceph-mon[75144]: pgmap v838: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:54 compute-0 elated_blackwell[256581]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:32:54 compute-0 elated_blackwell[256581]: --> relative data size: 1.0
Nov 25 20:32:54 compute-0 elated_blackwell[256581]: --> All data devices are unavailable
Nov 25 20:32:54 compute-0 systemd[1]: libpod-bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3.scope: Deactivated successfully.
Nov 25 20:32:54 compute-0 podman[256565]: 2025-11-25 20:32:54.650678409 +0000 UTC m=+1.476083932 container died bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 20:32:54 compute-0 systemd[1]: libpod-bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3.scope: Consumed 1.228s CPU time.
Nov 25 20:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6145acafa66e9470c5d8c2e86d7fae0707c5cfbb2c015fa850b02284c1a44f4-merged.mount: Deactivated successfully.
Nov 25 20:32:54 compute-0 podman[256565]: 2025-11-25 20:32:54.730058333 +0000 UTC m=+1.555463856 container remove bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:32:54 compute-0 systemd[1]: libpod-conmon-bef732699dc8f11ec33323ab11641b02ef73c1734c4f7e4a72e6d7335eebdaf3.scope: Deactivated successfully.
Nov 25 20:32:54 compute-0 sudo[256459]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:54 compute-0 sudo[256624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:32:54 compute-0 sudo[256624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:54 compute-0 sudo[256624]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:54 compute-0 sudo[256649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:32:54 compute-0 sudo[256649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:54 compute-0 sudo[256649]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:55 compute-0 sudo[256674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:32:55 compute-0 sudo[256674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:55 compute-0 sudo[256674]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:55 compute-0 sudo[256699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:32:55 compute-0 sudo[256699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:32:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v839: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:55 compute-0 podman[256764]: 2025-11-25 20:32:55.598387756 +0000 UTC m=+0.065925275 container create 9c1c6d8be28a92717f894a0d9e865a160af506895c662ce384ffc0156998656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:32:55 compute-0 systemd[1]: Started libpod-conmon-9c1c6d8be28a92717f894a0d9e865a160af506895c662ce384ffc0156998656d.scope.
Nov 25 20:32:55 compute-0 podman[256764]: 2025-11-25 20:32:55.571412323 +0000 UTC m=+0.038949882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:32:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:32:55 compute-0 podman[256764]: 2025-11-25 20:32:55.698064174 +0000 UTC m=+0.165601753 container init 9c1c6d8be28a92717f894a0d9e865a160af506895c662ce384ffc0156998656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:32:55 compute-0 podman[256764]: 2025-11-25 20:32:55.705210605 +0000 UTC m=+0.172748094 container start 9c1c6d8be28a92717f894a0d9e865a160af506895c662ce384ffc0156998656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:32:55 compute-0 podman[256764]: 2025-11-25 20:32:55.709025718 +0000 UTC m=+0.176563297 container attach 9c1c6d8be28a92717f894a0d9e865a160af506895c662ce384ffc0156998656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 20:32:55 compute-0 reverent_torvalds[256780]: 167 167
Nov 25 20:32:55 compute-0 systemd[1]: libpod-9c1c6d8be28a92717f894a0d9e865a160af506895c662ce384ffc0156998656d.scope: Deactivated successfully.
Nov 25 20:32:55 compute-0 podman[256764]: 2025-11-25 20:32:55.714378851 +0000 UTC m=+0.181916370 container died 9c1c6d8be28a92717f894a0d9e865a160af506895c662ce384ffc0156998656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-68b9397db7ca7e1e2690e187bdc06d9b5a281f6520031b32ca0678c6a3d79a99-merged.mount: Deactivated successfully.
Nov 25 20:32:55 compute-0 podman[256764]: 2025-11-25 20:32:55.763068004 +0000 UTC m=+0.230605493 container remove 9c1c6d8be28a92717f894a0d9e865a160af506895c662ce384ffc0156998656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:32:55 compute-0 systemd[1]: libpod-conmon-9c1c6d8be28a92717f894a0d9e865a160af506895c662ce384ffc0156998656d.scope: Deactivated successfully.
Nov 25 20:32:55 compute-0 podman[256805]: 2025-11-25 20:32:55.978745507 +0000 UTC m=+0.055288861 container create 9713bdd39faefc53f4e1d40236dcdce60ab276e806a98e665e9793025acd65ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:32:56 compute-0 systemd[1]: Started libpod-conmon-9713bdd39faefc53f4e1d40236dcdce60ab276e806a98e665e9793025acd65ca.scope.
Nov 25 20:32:56 compute-0 podman[256805]: 2025-11-25 20:32:55.957834397 +0000 UTC m=+0.034377811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:32:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a301d8f30c92ead9ccc2d69bd5c3ba686a9c6e374327158162a45e9311eeb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a301d8f30c92ead9ccc2d69bd5c3ba686a9c6e374327158162a45e9311eeb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a301d8f30c92ead9ccc2d69bd5c3ba686a9c6e374327158162a45e9311eeb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a301d8f30c92ead9ccc2d69bd5c3ba686a9c6e374327158162a45e9311eeb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:56 compute-0 podman[256805]: 2025-11-25 20:32:56.091047823 +0000 UTC m=+0.167591167 container init 9713bdd39faefc53f4e1d40236dcdce60ab276e806a98e665e9793025acd65ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:32:56 compute-0 podman[256805]: 2025-11-25 20:32:56.098552794 +0000 UTC m=+0.175096118 container start 9713bdd39faefc53f4e1d40236dcdce60ab276e806a98e665e9793025acd65ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:32:56 compute-0 podman[256805]: 2025-11-25 20:32:56.102054117 +0000 UTC m=+0.178597431 container attach 9713bdd39faefc53f4e1d40236dcdce60ab276e806a98e665e9793025acd65ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:32:56 compute-0 ceph-mon[75144]: pgmap v839: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:32:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:32:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:32:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:32:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:32:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:32:56 compute-0 priceless_keller[256822]: {
Nov 25 20:32:56 compute-0 priceless_keller[256822]:     "0": [
Nov 25 20:32:56 compute-0 priceless_keller[256822]:         {
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "devices": [
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "/dev/loop3"
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             ],
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_name": "ceph_lv0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_size": "21470642176",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "name": "ceph_lv0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "tags": {
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.cluster_name": "ceph",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.crush_device_class": "",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.encrypted": "0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.osd_id": "0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.type": "block",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.vdo": "0"
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             },
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "type": "block",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "vg_name": "ceph_vg0"
Nov 25 20:32:56 compute-0 priceless_keller[256822]:         }
Nov 25 20:32:56 compute-0 priceless_keller[256822]:     ],
Nov 25 20:32:56 compute-0 priceless_keller[256822]:     "1": [
Nov 25 20:32:56 compute-0 priceless_keller[256822]:         {
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "devices": [
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "/dev/loop4"
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             ],
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_name": "ceph_lv1",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_size": "21470642176",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "name": "ceph_lv1",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "tags": {
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.cluster_name": "ceph",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.crush_device_class": "",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.encrypted": "0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.osd_id": "1",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.type": "block",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.vdo": "0"
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             },
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "type": "block",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "vg_name": "ceph_vg1"
Nov 25 20:32:56 compute-0 priceless_keller[256822]:         }
Nov 25 20:32:56 compute-0 priceless_keller[256822]:     ],
Nov 25 20:32:56 compute-0 priceless_keller[256822]:     "2": [
Nov 25 20:32:56 compute-0 priceless_keller[256822]:         {
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "devices": [
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "/dev/loop5"
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             ],
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_name": "ceph_lv2",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_size": "21470642176",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "name": "ceph_lv2",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "tags": {
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.cluster_name": "ceph",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.crush_device_class": "",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.encrypted": "0",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.osd_id": "2",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.type": "block",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:                 "ceph.vdo": "0"
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             },
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "type": "block",
Nov 25 20:32:56 compute-0 priceless_keller[256822]:             "vg_name": "ceph_vg2"
Nov 25 20:32:56 compute-0 priceless_keller[256822]:         }
Nov 25 20:32:56 compute-0 priceless_keller[256822]:     ]
Nov 25 20:32:56 compute-0 priceless_keller[256822]: }
Nov 25 20:32:56 compute-0 systemd[1]: libpod-9713bdd39faefc53f4e1d40236dcdce60ab276e806a98e665e9793025acd65ca.scope: Deactivated successfully.
Nov 25 20:32:56 compute-0 podman[256805]: 2025-11-25 20:32:56.918136541 +0000 UTC m=+0.994679905 container died 9713bdd39faefc53f4e1d40236dcdce60ab276e806a98e665e9793025acd65ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-97a301d8f30c92ead9ccc2d69bd5c3ba686a9c6e374327158162a45e9311eeb0-merged.mount: Deactivated successfully.
Nov 25 20:32:56 compute-0 podman[256805]: 2025-11-25 20:32:56.987331184 +0000 UTC m=+1.063874518 container remove 9713bdd39faefc53f4e1d40236dcdce60ab276e806a98e665e9793025acd65ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:32:56 compute-0 systemd[1]: libpod-conmon-9713bdd39faefc53f4e1d40236dcdce60ab276e806a98e665e9793025acd65ca.scope: Deactivated successfully.
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:32:57
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', '.mgr', 'backups', 'cephfs.cephfs.data', 'volumes']
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:32:57 compute-0 sudo[256699]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:57 compute-0 sudo[256844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:32:57 compute-0 sudo[256844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:57 compute-0 sudo[256844]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:57 compute-0 sudo[256869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:32:57 compute-0 sudo[256869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:57 compute-0 sudo[256869]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:57 compute-0 sudo[256894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:32:57 compute-0 sudo[256894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:57 compute-0 sudo[256894]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v840: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:57 compute-0 sudo[256919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:32:57 compute-0 sudo[256919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:57 compute-0 podman[256983]: 2025-11-25 20:32:57.990270939 +0000 UTC m=+0.077014053 container create 72517de4e5f596712a01f2487a51fe21b584be35f370901b127585ae181b9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 20:32:58 compute-0 systemd[1]: Started libpod-conmon-72517de4e5f596712a01f2487a51fe21b584be35f370901b127585ae181b9331.scope.
Nov 25 20:32:58 compute-0 podman[256983]: 2025-11-25 20:32:57.954716497 +0000 UTC m=+0.041459641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:32:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:32:58 compute-0 podman[256983]: 2025-11-25 20:32:58.084763079 +0000 UTC m=+0.171506233 container init 72517de4e5f596712a01f2487a51fe21b584be35f370901b127585ae181b9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mccarthy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:32:58 compute-0 podman[256983]: 2025-11-25 20:32:58.096046971 +0000 UTC m=+0.182790065 container start 72517de4e5f596712a01f2487a51fe21b584be35f370901b127585ae181b9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mccarthy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:32:58 compute-0 podman[256983]: 2025-11-25 20:32:58.100392927 +0000 UTC m=+0.187136031 container attach 72517de4e5f596712a01f2487a51fe21b584be35f370901b127585ae181b9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mccarthy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 20:32:58 compute-0 objective_mccarthy[256999]: 167 167
Nov 25 20:32:58 compute-0 systemd[1]: libpod-72517de4e5f596712a01f2487a51fe21b584be35f370901b127585ae181b9331.scope: Deactivated successfully.
Nov 25 20:32:58 compute-0 podman[256983]: 2025-11-25 20:32:58.10649481 +0000 UTC m=+0.193237904 container died 72517de4e5f596712a01f2487a51fe21b584be35f370901b127585ae181b9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 25 20:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-40fc72799f55885953d6960b0868ab3fca4746f89edd7c306d674637f3d1e6ba-merged.mount: Deactivated successfully.
Nov 25 20:32:58 compute-0 podman[256983]: 2025-11-25 20:32:58.162356515 +0000 UTC m=+0.249099619 container remove 72517de4e5f596712a01f2487a51fe21b584be35f370901b127585ae181b9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:32:58 compute-0 systemd[1]: libpod-conmon-72517de4e5f596712a01f2487a51fe21b584be35f370901b127585ae181b9331.scope: Deactivated successfully.
Nov 25 20:32:58 compute-0 podman[257022]: 2025-11-25 20:32:58.393356219 +0000 UTC m=+0.073499278 container create bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:32:58 compute-0 systemd[1]: Started libpod-conmon-bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536.scope.
Nov 25 20:32:58 compute-0 podman[257022]: 2025-11-25 20:32:58.365596126 +0000 UTC m=+0.045739245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:32:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a869cb8b814e0c55d6fe762694529c1025471db42246791b70d205ff3f01106/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a869cb8b814e0c55d6fe762694529c1025471db42246791b70d205ff3f01106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a869cb8b814e0c55d6fe762694529c1025471db42246791b70d205ff3f01106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a869cb8b814e0c55d6fe762694529c1025471db42246791b70d205ff3f01106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:32:58 compute-0 podman[257022]: 2025-11-25 20:32:58.516954587 +0000 UTC m=+0.197097706 container init bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:32:58 compute-0 ceph-mon[75144]: pgmap v840: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:58 compute-0 podman[257022]: 2025-11-25 20:32:58.534949828 +0000 UTC m=+0.215092887 container start bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:32:58 compute-0 podman[257022]: 2025-11-25 20:32:58.539899041 +0000 UTC m=+0.220042110 container attach bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:32:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v841: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]: {
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "osd_id": 2,
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "type": "bluestore"
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:     },
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "osd_id": 1,
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "type": "bluestore"
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:     },
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "osd_id": 0,
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:         "type": "bluestore"
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]:     }
Nov 25 20:32:59 compute-0 flamboyant_bell[257038]: }
Nov 25 20:32:59 compute-0 systemd[1]: libpod-bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536.scope: Deactivated successfully.
Nov 25 20:32:59 compute-0 systemd[1]: libpod-bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536.scope: Consumed 1.158s CPU time.
Nov 25 20:32:59 compute-0 podman[257022]: 2025-11-25 20:32:59.683378008 +0000 UTC m=+1.363521077 container died bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a869cb8b814e0c55d6fe762694529c1025471db42246791b70d205ff3f01106-merged.mount: Deactivated successfully.
Nov 25 20:32:59 compute-0 podman[257022]: 2025-11-25 20:32:59.75892547 +0000 UTC m=+1.439068539 container remove bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:32:59 compute-0 systemd[1]: libpod-conmon-bc8f86aa36d2368138da220fc77211957c60dda8b21bae89e1e3cde7c5254536.scope: Deactivated successfully.
Nov 25 20:32:59 compute-0 sudo[256919]: pam_unix(sudo:session): session closed for user root
Nov 25 20:32:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:32:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:32:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:32:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:32:59 compute-0 sudo[257084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:32:59 compute-0 sudo[257084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:32:59 compute-0 sudo[257084]: pam_unix(sudo:session): session closed for user root
Nov 25 20:33:00 compute-0 sudo[257109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:33:00 compute-0 sudo[257109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:33:00 compute-0 sudo[257109]: pam_unix(sudo:session): session closed for user root
Nov 25 20:33:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:00 compute-0 ceph-mon[75144]: pgmap v841: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:33:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:33:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v842: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:33:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:33:02 compute-0 ceph-mon[75144]: pgmap v842: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v843: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:04 compute-0 ceph-mon[75144]: pgmap v843: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v844: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:06 compute-0 ceph-mon[75144]: pgmap v844: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v845: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:08 compute-0 ceph-mon[75144]: pgmap v845: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v846: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:09 compute-0 ceph-mon[75144]: pgmap v846: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v847: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:12 compute-0 podman[257134]: 2025-11-25 20:33:12.025361045 +0000 UTC m=+0.109383678 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 20:33:12 compute-0 ceph-mon[75144]: pgmap v847: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v848: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:14 compute-0 ceph-mon[75144]: pgmap v848: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:15 compute-0 podman[257155]: 2025-11-25 20:33:15.000139662 +0000 UTC m=+0.093958537 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 20:33:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v849: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:16 compute-0 ceph-mon[75144]: pgmap v849: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:33:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3082538793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:33:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:33:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3082538793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:33:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v850: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3082538793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:33:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3082538793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:33:18 compute-0 ceph-mon[75144]: pgmap v850: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v851: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:20 compute-0 ceph-mon[75144]: pgmap v851: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v852: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.554007) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102801554052, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1103, "num_deletes": 251, "total_data_size": 1082198, "memory_usage": 1103480, "flush_reason": "Manual Compaction"}
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102801564585, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1058341, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16696, "largest_seqno": 17798, "table_properties": {"data_size": 1053020, "index_size": 2781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11295, "raw_average_key_size": 19, "raw_value_size": 1042335, "raw_average_value_size": 1803, "num_data_blocks": 127, "num_entries": 578, "num_filter_entries": 578, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764102696, "oldest_key_time": 1764102696, "file_creation_time": 1764102801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 10645 microseconds, and 6598 cpu microseconds.
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.564651) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1058341 bytes OK
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.564678) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.566967) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.566993) EVENT_LOG_v1 {"time_micros": 1764102801566985, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.567019) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1077082, prev total WAL file size 1077082, number of live WAL files 2.
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.567897) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1033KB)], [41(4567KB)]
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102801567999, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 5735920, "oldest_snapshot_seqno": -1}
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 3586 keys, 4557251 bytes, temperature: kUnknown
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102801604778, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 4557251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4533827, "index_size": 13331, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9029, "raw_key_size": 84949, "raw_average_key_size": 23, "raw_value_size": 4469919, "raw_average_value_size": 1246, "num_data_blocks": 576, "num_entries": 3586, "num_filter_entries": 3586, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764102801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.605306) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 4557251 bytes
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.607120) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.1 rd, 123.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 4.5 +0.0 blob) out(4.3 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 4100, records dropped: 514 output_compression: NoCompression
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.607151) EVENT_LOG_v1 {"time_micros": 1764102801607136, "job": 20, "event": "compaction_finished", "compaction_time_micros": 36985, "compaction_time_cpu_micros": 27466, "output_level": 6, "num_output_files": 1, "total_output_size": 4557251, "num_input_records": 4100, "num_output_records": 3586, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102801607919, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102801610689, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.567732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.610883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.610892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.610895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.610898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:33:21 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:33:21.610901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
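[annotation] The rocksdb EVENT_LOG_v1 entries above are single-line JSON payloads following the literal "EVENT_LOG_v1 " marker, which makes them easy to mine straight out of the journal. A minimal sketch, assuming a saved capture of lines like these (the marker and field names come from the log above; the file name is hypothetical):

    import json

    def parse_rocksdb_events(lines):
        """Yield the JSON payload of each EVENT_LOG_v1 entry.

        Assumes journal-style lines like the ceph-mon output above, where
        the payload follows the literal marker 'EVENT_LOG_v1 '.
        """
        for line in lines:
            _, sep, payload = line.partition("EVENT_LOG_v1 ")
            if sep:
                yield json.loads(payload)

    # Example: summarize the compaction jobs seen in a saved excerpt.
    with open("journal.txt") as fh:  # hypothetical capture of the log above
        for ev in parse_rocksdb_events(fh):
            if ev.get("event") == "compaction_finished":
                print(ev["job"], ev["total_output_size"], ev["lsm_state"])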
Nov 25 20:33:22 compute-0 podman[257176]: 2025-11-25 20:33:22.043906911 +0000 UTC m=+0.133652518 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:33:22 compute-0 ceph-mon[75144]: pgmap v852: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v853: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:24 compute-0 ceph-mon[75144]: pgmap v853: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v854: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:26 compute-0 ceph-mon[75144]: pgmap v854: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:33:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:33:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:33:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:33:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:33:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:33:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v855: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:27 compute-0 ceph-mon[75144]: pgmap v855: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v856: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:30 compute-0 ceph-mon[75144]: pgmap v856: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v857: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:33 compute-0 ceph-mon[75144]: pgmap v857: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v858: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:34 compute-0 ceph-mon[75144]: pgmap v858: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:35 compute-0 nova_compute[248866]: 2025-11-25 20:33:35.037 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:35 compute-0 nova_compute[248866]: 2025-11-25 20:33:35.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v859: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:36 compute-0 nova_compute[248866]: 2025-11-25 20:33:36.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:36 compute-0 nova_compute[248866]: 2025-11-25 20:33:36.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:33:36 compute-0 ceph-mon[75144]: pgmap v859: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v860: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:38 compute-0 nova_compute[248866]: 2025-11-25 20:33:38.044 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:38 compute-0 ceph-mon[75144]: pgmap v860: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:39 compute-0 nova_compute[248866]: 2025-11-25 20:33:39.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:39 compute-0 nova_compute[248866]: 2025-11-25 20:33:39.061 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v861: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.059 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.060 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.100 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.101 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.101 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.102 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.102 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:33:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:40 compute-0 ceph-mon[75144]: pgmap v861: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:33:40 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740620004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.563 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
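[annotation] The resource tracker above shells out to the exact command logged ("ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf"). A minimal sketch reproducing that call; the command and flags are taken from the log, while the JSON layout ("stats" with total_bytes/total_avail_bytes) is an assumption about the ceph df schema, not something these lines prove:

    import json
    import subprocess

    # Same invocation nova_compute logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out).get("stats", {})  # assumed top-level key
    print(stats.get("total_bytes"), stats.get("total_avail_bytes"))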
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.820 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.822 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5310MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.822 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.823 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.906 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.906 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:33:40 compute-0 nova_compute[248866]: 2025-11-25 20:33:40.932 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:33:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:33:41 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82751978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:33:41 compute-0 nova_compute[248866]: 2025-11-25 20:33:41.400 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:33:41 compute-0 nova_compute[248866]: 2025-11-25 20:33:41.409 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:33:41 compute-0 nova_compute[248866]: 2025-11-25 20:33:41.439 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:33:41 compute-0 nova_compute[248866]: 2025-11-25 20:33:41.442 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:33:41 compute-0 nova_compute[248866]: 2025-11-25 20:33:41.442 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
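[annotation] The inventory dict logged above determines what placement offers for this node; its capacity check is commonly stated as (total - reserved) x allocation_ratio per resource class. A quick check against the values in the log (numbers copied from the inventory line; the formula is the assumption here):

    # Values copied from the inventory line above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        # Effective schedulable capacity under the assumed placement formula.
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1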
Nov 25 20:33:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v862: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:41 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1740620004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:33:41 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/82751978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:33:42 compute-0 nova_compute[248866]: 2025-11-25 20:33:42.425 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:42 compute-0 nova_compute[248866]: 2025-11-25 20:33:42.426 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:33:42 compute-0 ceph-mon[75144]: pgmap v862: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:43 compute-0 podman[257248]: 2025-11-25 20:33:43.031527286 +0000 UTC m=+0.118815246 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 20:33:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v863: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:44 compute-0 ceph-mon[75144]: pgmap v863: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v864: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:46 compute-0 podman[257268]: 2025-11-25 20:33:46.003123122 +0000 UTC m=+0.088507776 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 25 20:33:46 compute-0 ceph-mon[75144]: pgmap v864: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v865: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:48 compute-0 ceph-mon[75144]: pgmap v865: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:33:48.948 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:33:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:33:48.949 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:33:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:33:48.949 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
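[annotation] The Acquiring/acquired/released triplet above is the standard oslo.concurrency lockutils trace, which logs waited and held times around each critical section. A minimal sketch of the same pattern, assuming only what the log shows (the lock name matches; the guarded body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Hypothetical body; neutron's ProcessMonitor runs its child
        # liveness checks inside a lock like this so runs never overlap.
        pass

    # Equivalent context-manager form:
    with lockutils.lock('_check_child_processes'):
        pass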
Nov 25 20:33:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v866: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:50 compute-0 ceph-mon[75144]: pgmap v866: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v867: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:52 compute-0 ceph-mon[75144]: pgmap v867: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:52 compute-0 podman[257289]: 2025-11-25 20:33:52.991138524 +0000 UTC m=+0.089450592 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 20:33:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v868: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:53 compute-0 ceph-mon[75144]: pgmap v868: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:33:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v869: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:56 compute-0 ceph-mon[75144]: pgmap v869: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:33:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:33:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:33:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:33:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:33:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:33:57
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr', 'backups', 'images']
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:33:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v870: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:58 compute-0 ceph-mon[75144]: pgmap v870: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:33:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v871: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:00 compute-0 sudo[257316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:34:00 compute-0 sudo[257316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:00 compute-0 sudo[257316]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:00 compute-0 sudo[257341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:34:00 compute-0 sudo[257341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:00 compute-0 sudo[257341]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:00 compute-0 sudo[257366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:34:00 compute-0 sudo[257366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:00 compute-0 sudo[257366]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.363744) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102840363789, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 534, "num_deletes": 255, "total_data_size": 356228, "memory_usage": 366472, "flush_reason": "Manual Compaction"}
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102840370509, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 351704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17799, "largest_seqno": 18332, "table_properties": {"data_size": 348747, "index_size": 929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6295, "raw_average_key_size": 17, "raw_value_size": 342979, "raw_average_value_size": 947, "num_data_blocks": 43, "num_entries": 362, "num_filter_entries": 362, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764102802, "oldest_key_time": 1764102802, "file_creation_time": 1764102840, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 6826 microseconds, and 4157 cpu microseconds.
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.370570) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 351704 bytes OK
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.370599) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.372691) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.372713) EVENT_LOG_v1 {"time_micros": 1764102840372706, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.372743) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 353195, prev total WAL file size 353195, number of live WAL files 2.
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.373468) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(343KB)], [44(4450KB)]
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102840373532, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 4908955, "oldest_snapshot_seqno": -1}
Nov 25 20:34:00 compute-0 sudo[257391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:34:00 compute-0 sudo[257391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 3431 keys, 4815275 bytes, temperature: kUnknown
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102840414667, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 4815275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4791700, "index_size": 13866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 82852, "raw_average_key_size": 24, "raw_value_size": 4729337, "raw_average_value_size": 1378, "num_data_blocks": 596, "num_entries": 3431, "num_filter_entries": 3431, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764102840, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.415031) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 4815275 bytes
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.416791) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.1 rd, 116.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 4.3 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(27.6) write-amplify(13.7) OK, records in: 3948, records dropped: 517 output_compression: NoCompression
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.416850) EVENT_LOG_v1 {"time_micros": 1764102840416836, "job": 22, "event": "compaction_finished", "compaction_time_micros": 41230, "compaction_time_cpu_micros": 27867, "output_level": 6, "num_output_files": 1, "total_output_size": 4815275, "num_input_records": 3948, "num_output_records": 3431, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102840417266, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764102840418826, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.373327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.418912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.418920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.418922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.418924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:34:00 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:34:00.418925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
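[annotation] The amplification figures rocksdb prints for JOB 22 above reproduce exactly from the byte counts in the same lines: the freshly flushed L0 input is 351704 bytes (table #46), total compaction input is 4908955 bytes, and the output table is 4815275 bytes, with both ratios normalized by the L0 bytes:

    # Byte counts copied from the JOB 22 lines above.
    l0_in, total_in, out = 351_704, 4_908_955, 4_815_275
    print(round(out / l0_in, 1))               # write-amplify -> 13.7
    print(round((total_in + out) / l0_in, 1))  # read-write-amplify -> 27.6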
Nov 25 20:34:00 compute-0 ceph-mon[75144]: pgmap v871: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:00 compute-0 sudo[257391]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:34:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:34:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:34:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:34:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:34:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:34:01 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 62f6d2ca-ad22-40ca-ae03-e508eee614d8 does not exist
Nov 25 20:34:01 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 08448533-982e-4ba5-9ddc-1d9e0ed6ac1a does not exist
Nov 25 20:34:01 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 52713355-3db7-47de-b12b-ce3b1288abe3 does not exist
Nov 25 20:34:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:34:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:34:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:34:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:34:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:34:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:34:01 compute-0 sudo[257447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:34:01 compute-0 sudo[257447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:01 compute-0 sudo[257447]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:01 compute-0 sudo[257472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:34:01 compute-0 sudo[257472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:01 compute-0 sudo[257472]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:01 compute-0 sudo[257497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:34:01 compute-0 sudo[257497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:01 compute-0 sudo[257497]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:01 compute-0 sudo[257522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:34:01 compute-0 sudo[257522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v872: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:34:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:34:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:34:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:34:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:34:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:34:01 compute-0 podman[257588]: 2025-11-25 20:34:01.930304455 +0000 UTC m=+0.073124154 container create 4fa899f9dcf9e72cfd631eb6dd0fe47ccb5761bd9512a57dd674f62474b3ae4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:34:01 compute-0 systemd[1]: Started libpod-conmon-4fa899f9dcf9e72cfd631eb6dd0fe47ccb5761bd9512a57dd674f62474b3ae4b.scope.
Nov 25 20:34:01 compute-0 podman[257588]: 2025-11-25 20:34:01.901775604 +0000 UTC m=+0.044595343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:34:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:34:02 compute-0 podman[257588]: 2025-11-25 20:34:02.034511671 +0000 UTC m=+0.177331380 container init 4fa899f9dcf9e72cfd631eb6dd0fe47ccb5761bd9512a57dd674f62474b3ae4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:34:02 compute-0 podman[257588]: 2025-11-25 20:34:02.048236515 +0000 UTC m=+0.191056204 container start 4fa899f9dcf9e72cfd631eb6dd0fe47ccb5761bd9512a57dd674f62474b3ae4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lamarr, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 20:34:02 compute-0 podman[257588]: 2025-11-25 20:34:02.052755543 +0000 UTC m=+0.195575252 container attach 4fa899f9dcf9e72cfd631eb6dd0fe47ccb5761bd9512a57dd674f62474b3ae4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:34:02 compute-0 admiring_lamarr[257604]: 167 167
Nov 25 20:34:02 compute-0 systemd[1]: libpod-4fa899f9dcf9e72cfd631eb6dd0fe47ccb5761bd9512a57dd674f62474b3ae4b.scope: Deactivated successfully.
Nov 25 20:34:02 compute-0 podman[257588]: 2025-11-25 20:34:02.058439402 +0000 UTC m=+0.201259071 container died 4fa899f9dcf9e72cfd631eb6dd0fe47ccb5761bd9512a57dd674f62474b3ae4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lamarr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9d85e248886a4924d92cc1b705219d3f1649f4cd3a00fd58241d877344aca63-merged.mount: Deactivated successfully.
Nov 25 20:34:02 compute-0 podman[257588]: 2025-11-25 20:34:02.114399143 +0000 UTC m=+0.257218812 container remove 4fa899f9dcf9e72cfd631eb6dd0fe47ccb5761bd9512a57dd674f62474b3ae4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lamarr, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:34:02 compute-0 systemd[1]: libpod-conmon-4fa899f9dcf9e72cfd631eb6dd0fe47ccb5761bd9512a57dd674f62474b3ae4b.scope: Deactivated successfully.
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:34:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:34:02 compute-0 podman[257626]: 2025-11-25 20:34:02.359081411 +0000 UTC m=+0.066773944 container create 1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:34:02 compute-0 systemd[1]: Started libpod-conmon-1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d.scope.
Nov 25 20:34:02 compute-0 podman[257626]: 2025-11-25 20:34:02.330619053 +0000 UTC m=+0.038311546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:34:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bc8f6096c0e6ef1763de98857d6ddb9334ca71e977f7390e7eea4efffd43f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bc8f6096c0e6ef1763de98857d6ddb9334ca71e977f7390e7eea4efffd43f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bc8f6096c0e6ef1763de98857d6ddb9334ca71e977f7390e7eea4efffd43f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bc8f6096c0e6ef1763de98857d6ddb9334ca71e977f7390e7eea4efffd43f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bc8f6096c0e6ef1763de98857d6ddb9334ca71e977f7390e7eea4efffd43f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:02 compute-0 podman[257626]: 2025-11-25 20:34:02.486251682 +0000 UTC m=+0.193944265 container init 1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:34:02 compute-0 podman[257626]: 2025-11-25 20:34:02.501426497 +0000 UTC m=+0.209119020 container start 1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:34:02 compute-0 podman[257626]: 2025-11-25 20:34:02.505736489 +0000 UTC m=+0.213429072 container attach 1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:34:02 compute-0 ceph-mon[75144]: pgmap v872: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v873: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:03 compute-0 stoic_nightingale[257642]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:34:03 compute-0 stoic_nightingale[257642]: --> relative data size: 1.0
Nov 25 20:34:03 compute-0 stoic_nightingale[257642]: --> All data devices are unavailable
Nov 25 20:34:03 compute-0 systemd[1]: libpod-1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d.scope: Deactivated successfully.
Nov 25 20:34:03 compute-0 podman[257626]: 2025-11-25 20:34:03.606616152 +0000 UTC m=+1.314308685 container died 1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 25 20:34:03 compute-0 systemd[1]: libpod-1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d.scope: Consumed 1.069s CPU time.
Nov 25 20:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-58bc8f6096c0e6ef1763de98857d6ddb9334ca71e977f7390e7eea4efffd43f2-merged.mount: Deactivated successfully.
Nov 25 20:34:03 compute-0 podman[257626]: 2025-11-25 20:34:03.680002712 +0000 UTC m=+1.387695225 container remove 1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:34:03 compute-0 systemd[1]: libpod-conmon-1319763588ea85a30af9cf813b21c992d8d6f636f4c83d29126b61c4722e4c4d.scope: Deactivated successfully.
Nov 25 20:34:03 compute-0 sudo[257522]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:03 compute-0 sudo[257683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:34:03 compute-0 sudo[257683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:03 compute-0 sudo[257683]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:03 compute-0 sudo[257708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:34:03 compute-0 sudo[257708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:03 compute-0 sudo[257708]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:03 compute-0 sudo[257733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:34:03 compute-0 sudo[257733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:04 compute-0 sudo[257733]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:04 compute-0 sudo[257758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:34:04 compute-0 sudo[257758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:04 compute-0 ceph-mon[75144]: pgmap v873: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:04 compute-0 podman[257824]: 2025-11-25 20:34:04.559019197 +0000 UTC m=+0.072384083 container create 3c44f4d22ddca9a694c02002829316a5bd054290c01184d7c9c738d849318514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:34:04 compute-0 systemd[1]: Started libpod-conmon-3c44f4d22ddca9a694c02002829316a5bd054290c01184d7c9c738d849318514.scope.
Nov 25 20:34:04 compute-0 podman[257824]: 2025-11-25 20:34:04.527657377 +0000 UTC m=+0.041022303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:34:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:34:04 compute-0 podman[257824]: 2025-11-25 20:34:04.65816315 +0000 UTC m=+0.171528076 container init 3c44f4d22ddca9a694c02002829316a5bd054290c01184d7c9c738d849318514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 20:34:04 compute-0 podman[257824]: 2025-11-25 20:34:04.668992964 +0000 UTC m=+0.182357840 container start 3c44f4d22ddca9a694c02002829316a5bd054290c01184d7c9c738d849318514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:34:04 compute-0 podman[257824]: 2025-11-25 20:34:04.67315107 +0000 UTC m=+0.186515966 container attach 3c44f4d22ddca9a694c02002829316a5bd054290c01184d7c9c738d849318514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:34:04 compute-0 zen_jemison[257840]: 167 167
Nov 25 20:34:04 compute-0 systemd[1]: libpod-3c44f4d22ddca9a694c02002829316a5bd054290c01184d7c9c738d849318514.scope: Deactivated successfully.
Nov 25 20:34:04 compute-0 podman[257824]: 2025-11-25 20:34:04.678766978 +0000 UTC m=+0.192131894 container died 3c44f4d22ddca9a694c02002829316a5bd054290c01184d7c9c738d849318514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3f06b440adabefee2c031775f9c27f09959653b24235145bed8dd102dc05dda-merged.mount: Deactivated successfully.
Nov 25 20:34:04 compute-0 podman[257824]: 2025-11-25 20:34:04.73048804 +0000 UTC m=+0.243852926 container remove 3c44f4d22ddca9a694c02002829316a5bd054290c01184d7c9c738d849318514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:34:04 compute-0 systemd[1]: libpod-conmon-3c44f4d22ddca9a694c02002829316a5bd054290c01184d7c9c738d849318514.scope: Deactivated successfully.
Nov 25 20:34:04 compute-0 podman[257864]: 2025-11-25 20:34:04.98059426 +0000 UTC m=+0.070067527 container create 792c0cd0bfcc4eec8f8ea4af7638a9fd37aa16bc1f76ecc32bb0463fd6531e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keldysh, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 20:34:05 compute-0 systemd[1]: Started libpod-conmon-792c0cd0bfcc4eec8f8ea4af7638a9fd37aa16bc1f76ecc32bb0463fd6531e0c.scope.
Nov 25 20:34:05 compute-0 podman[257864]: 2025-11-25 20:34:04.95134353 +0000 UTC m=+0.040816857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:34:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1a7fa0c430b1abf42144ab39c3c4a108b7e1bfa061316aabb1d66f271fa48bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1a7fa0c430b1abf42144ab39c3c4a108b7e1bfa061316aabb1d66f271fa48bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1a7fa0c430b1abf42144ab39c3c4a108b7e1bfa061316aabb1d66f271fa48bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1a7fa0c430b1abf42144ab39c3c4a108b7e1bfa061316aabb1d66f271fa48bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:05 compute-0 podman[257864]: 2025-11-25 20:34:05.082556283 +0000 UTC m=+0.172029550 container init 792c0cd0bfcc4eec8f8ea4af7638a9fd37aa16bc1f76ecc32bb0463fd6531e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keldysh, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:34:05 compute-0 podman[257864]: 2025-11-25 20:34:05.096277409 +0000 UTC m=+0.185750646 container start 792c0cd0bfcc4eec8f8ea4af7638a9fd37aa16bc1f76ecc32bb0463fd6531e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:34:05 compute-0 podman[257864]: 2025-11-25 20:34:05.100318902 +0000 UTC m=+0.189792159 container attach 792c0cd0bfcc4eec8f8ea4af7638a9fd37aa16bc1f76ecc32bb0463fd6531e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:34:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v874: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:05 compute-0 competent_keldysh[257880]: {
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:     "0": [
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:         {
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "devices": [
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "/dev/loop3"
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             ],
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_name": "ceph_lv0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_size": "21470642176",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "name": "ceph_lv0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "tags": {
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.cluster_name": "ceph",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.crush_device_class": "",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.encrypted": "0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.osd_id": "0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.type": "block",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.vdo": "0"
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             },
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "type": "block",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "vg_name": "ceph_vg0"
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:         }
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:     ],
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:     "1": [
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:         {
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "devices": [
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "/dev/loop4"
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             ],
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_name": "ceph_lv1",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_size": "21470642176",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "name": "ceph_lv1",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "tags": {
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.cluster_name": "ceph",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.crush_device_class": "",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.encrypted": "0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.osd_id": "1",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.type": "block",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.vdo": "0"
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             },
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "type": "block",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "vg_name": "ceph_vg1"
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:         }
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:     ],
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:     "2": [
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:         {
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "devices": [
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "/dev/loop5"
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             ],
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_name": "ceph_lv2",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_size": "21470642176",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "name": "ceph_lv2",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "tags": {
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.cluster_name": "ceph",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.crush_device_class": "",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.encrypted": "0",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.osd_id": "2",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.type": "block",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:                 "ceph.vdo": "0"
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             },
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "type": "block",
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:             "vg_name": "ceph_vg2"
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:         }
Nov 25 20:34:05 compute-0 competent_keldysh[257880]:     ]
Nov 25 20:34:05 compute-0 competent_keldysh[257880]: }
Nov 25 20:34:05 compute-0 systemd[1]: libpod-792c0cd0bfcc4eec8f8ea4af7638a9fd37aa16bc1f76ecc32bb0463fd6531e0c.scope: Deactivated successfully.
Nov 25 20:34:05 compute-0 podman[257864]: 2025-11-25 20:34:05.947738039 +0000 UTC m=+1.037211346 container died 792c0cd0bfcc4eec8f8ea4af7638a9fd37aa16bc1f76ecc32bb0463fd6531e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keldysh, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1a7fa0c430b1abf42144ab39c3c4a108b7e1bfa061316aabb1d66f271fa48bf-merged.mount: Deactivated successfully.
Nov 25 20:34:06 compute-0 podman[257864]: 2025-11-25 20:34:06.039338131 +0000 UTC m=+1.128811398 container remove 792c0cd0bfcc4eec8f8ea4af7638a9fd37aa16bc1f76ecc32bb0463fd6531e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:34:06 compute-0 systemd[1]: libpod-conmon-792c0cd0bfcc4eec8f8ea4af7638a9fd37aa16bc1f76ecc32bb0463fd6531e0c.scope: Deactivated successfully.
Nov 25 20:34:06 compute-0 sudo[257758]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:06 compute-0 sudo[257905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:34:06 compute-0 sudo[257905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:06 compute-0 sudo[257905]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:06 compute-0 sudo[257930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:34:06 compute-0 sudo[257930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:06 compute-0 sudo[257930]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:06 compute-0 sudo[257955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:34:06 compute-0 sudo[257955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:06 compute-0 sudo[257955]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:06 compute-0 sudo[257980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:34:06 compute-0 sudo[257980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:06 compute-0 ceph-mon[75144]: pgmap v874: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:06 compute-0 podman[258046]: 2025-11-25 20:34:06.952781292 +0000 UTC m=+0.062544286 container create 29e793a3d7787e6cbd100c68e8074aa1f8dedf215562845ee6703d61a4d63aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:34:06 compute-0 systemd[1]: Started libpod-conmon-29e793a3d7787e6cbd100c68e8074aa1f8dedf215562845ee6703d61a4d63aee.scope.
Nov 25 20:34:07 compute-0 podman[258046]: 2025-11-25 20:34:06.926746652 +0000 UTC m=+0.036509716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:34:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:34:07 compute-0 podman[258046]: 2025-11-25 20:34:07.056527294 +0000 UTC m=+0.166290318 container init 29e793a3d7787e6cbd100c68e8074aa1f8dedf215562845ee6703d61a4d63aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:34:07 compute-0 podman[258046]: 2025-11-25 20:34:07.067318007 +0000 UTC m=+0.177081031 container start 29e793a3d7787e6cbd100c68e8074aa1f8dedf215562845ee6703d61a4d63aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:34:07 compute-0 podman[258046]: 2025-11-25 20:34:07.072964606 +0000 UTC m=+0.182727620 container attach 29e793a3d7787e6cbd100c68e8074aa1f8dedf215562845ee6703d61a4d63aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:34:07 compute-0 youthful_lichterman[258062]: 167 167
Nov 25 20:34:07 compute-0 systemd[1]: libpod-29e793a3d7787e6cbd100c68e8074aa1f8dedf215562845ee6703d61a4d63aee.scope: Deactivated successfully.
Nov 25 20:34:07 compute-0 podman[258046]: 2025-11-25 20:34:07.07594565 +0000 UTC m=+0.185708664 container died 29e793a3d7787e6cbd100c68e8074aa1f8dedf215562845ee6703d61a4d63aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0869e206a5b56c1f694cc3543d4819018bd87319711500bf092868bf808078db-merged.mount: Deactivated successfully.
Nov 25 20:34:07 compute-0 podman[258046]: 2025-11-25 20:34:07.128056463 +0000 UTC m=+0.237819457 container remove 29e793a3d7787e6cbd100c68e8074aa1f8dedf215562845ee6703d61a4d63aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:34:07 compute-0 systemd[1]: libpod-conmon-29e793a3d7787e6cbd100c68e8074aa1f8dedf215562845ee6703d61a4d63aee.scope: Deactivated successfully.
Nov 25 20:34:07 compute-0 podman[258087]: 2025-11-25 20:34:07.378886183 +0000 UTC m=+0.067206687 container create f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:34:07 compute-0 systemd[1]: Started libpod-conmon-f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4.scope.
Nov 25 20:34:07 compute-0 podman[258087]: 2025-11-25 20:34:07.351586078 +0000 UTC m=+0.039906632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:34:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v875: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f0ee3c064604678a2fa5e078b38f0c7726085a58f3cfba829678816bd9ed2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f0ee3c064604678a2fa5e078b38f0c7726085a58f3cfba829678816bd9ed2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f0ee3c064604678a2fa5e078b38f0c7726085a58f3cfba829678816bd9ed2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f0ee3c064604678a2fa5e078b38f0c7726085a58f3cfba829678816bd9ed2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:34:07 compute-0 podman[258087]: 2025-11-25 20:34:07.496703001 +0000 UTC m=+0.185023565 container init f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:34:07 compute-0 podman[258087]: 2025-11-25 20:34:07.510621601 +0000 UTC m=+0.198942115 container start f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:34:07 compute-0 podman[258087]: 2025-11-25 20:34:07.515639482 +0000 UTC m=+0.203960046 container attach f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:34:08 compute-0 jolly_hertz[258103]: {
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "osd_id": 2,
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "type": "bluestore"
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:     },
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "osd_id": 1,
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "type": "bluestore"
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:     },
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "osd_id": 0,
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:         "type": "bluestore"
Nov 25 20:34:08 compute-0 jolly_hertz[258103]:     }
Nov 25 20:34:08 compute-0 jolly_hertz[258103]: }
Nov 25 20:34:08 compute-0 systemd[1]: libpod-f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4.scope: Deactivated successfully.
Nov 25 20:34:08 compute-0 systemd[1]: libpod-f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4.scope: Consumed 1.094s CPU time.
Nov 25 20:34:08 compute-0 podman[258087]: 2025-11-25 20:34:08.592665906 +0000 UTC m=+1.280986410 container died f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:34:08 compute-0 ceph-mon[75144]: pgmap v875: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-75f0ee3c064604678a2fa5e078b38f0c7726085a58f3cfba829678816bd9ed2c-merged.mount: Deactivated successfully.
Nov 25 20:34:08 compute-0 podman[258087]: 2025-11-25 20:34:08.705273417 +0000 UTC m=+1.393593941 container remove f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 20:34:08 compute-0 systemd[1]: libpod-conmon-f2d95fe7d602ce5d807c22df2e78b80a463f391861dc26517eca6c6ef08776b4.scope: Deactivated successfully.
Nov 25 20:34:08 compute-0 sudo[257980]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:34:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:34:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:34:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:34:08 compute-0 sudo[258149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:34:08 compute-0 sudo[258149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:08 compute-0 sudo[258149]: pam_unix(sudo:session): session closed for user root
Nov 25 20:34:08 compute-0 sudo[258174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:34:08 compute-0 sudo[258174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:34:08 compute-0 sudo[258174]: pam_unix(sudo:session): session closed for user root
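The ceph-admin sudo bursts above (/bin/true, then /bin/ls /etc/sysctl.d) look like cephadm's usual probe pattern: verify that passwordless sudo works at all, then run the command it actually cares about. A sketch of the same two-step probe, assuming passwordless sudo is configured for the calling user:

    import subprocess

    # Step 1: probe passwordless sudo; -n makes sudo fail rather than
    # prompt if a password would be required.
    if subprocess.run(["sudo", "-n", "/bin/true"]).returncode == 0:
        # Step 2: the real command, mirroring the log entries above.
        listing = subprocess.run(["sudo", "-n", "/bin/ls", "/etc/sysctl.d"],
                                 capture_output=True, text=True)
        print(listing.stdout)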
Nov 25 20:34:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v876: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:34:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:34:09 compute-0 ceph-mon[75144]: pgmap v876: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v877: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:12 compute-0 ceph-mon[75144]: pgmap v877: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v878: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:14 compute-0 podman[258199]: 2025-11-25 20:34:14.002165687 +0000 UTC m=+0.093256610 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 20:34:14 compute-0 ceph-mon[75144]: pgmap v878: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v879: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:16 compute-0 ceph-mon[75144]: pgmap v879: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:17 compute-0 podman[258217]: 2025-11-25 20:34:17.002374856 +0000 UTC m=+0.095024139 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 20:34:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:34:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3840852960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:34:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:34:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3840852960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:34:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v880: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3840852960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:34:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3840852960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
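The two client.openstack dispatches above are what external capacity polling looks like from the monitor's side: a client submits JSON-formatted mon commands ({"prefix":"df"} and {"prefix":"osd pool get-quota"}) and the audit channel records each dispatch. A minimal sketch of issuing the df command the same way through the python-rados binding, assuming /etc/ceph/ceph.conf and a client.openstack keyring are readable:

    import json
    import rados

    # Connect as the entity named in the audit lines above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        # The same JSON command the monitor logged dispatching.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        if ret == 0:
            stats = json.loads(outbuf)
            print(stats["stats"]["total_bytes"])
    finally:
        cluster.shutdown()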
Nov 25 20:34:18 compute-0 ceph-mon[75144]: pgmap v880: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v881: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:20 compute-0 ceph-mon[75144]: pgmap v881: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v882: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:22 compute-0 ceph-mon[75144]: pgmap v882: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v883: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:24 compute-0 podman[258237]: 2025-11-25 20:34:24.027181808 +0000 UTC m=+0.128481747 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 20:34:24 compute-0 ceph-mon[75144]: pgmap v883: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v884: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:26 compute-0 ceph-mon[75144]: pgmap v884: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:34:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:34:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:34:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:34:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:34:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:34:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v885: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:27 compute-0 ceph-mon[75144]: pgmap v885: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v886: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:30 compute-0 ceph-mon[75144]: pgmap v886: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v887: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:31 compute-0 ceph-mon[75144]: pgmap v887: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v888: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:34 compute-0 ceph-mon[75144]: pgmap v888: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:35 compute-0 nova_compute[248866]: 2025-11-25 20:34:35.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:34:35 compute-0 nova_compute[248866]: 2025-11-25 20:34:35.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:34:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v889: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:36 compute-0 ceph-mon[75144]: pgmap v889: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:37 compute-0 nova_compute[248866]: 2025-11-25 20:34:37.044 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:34:37 compute-0 nova_compute[248866]: 2025-11-25 20:34:37.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:34:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v890: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:38 compute-0 ceph-mon[75144]: pgmap v890: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:39 compute-0 nova_compute[248866]: 2025-11-25 20:34:39.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:34:39 compute-0 nova_compute[248866]: 2025-11-25 20:34:39.044 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:34:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v891: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:40 compute-0 ceph-mon[75144]: pgmap v891: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.065 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.065 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.097 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.098 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.099 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.099 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.099 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:34:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v892: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:34:41 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/688810948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.554 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:34:41 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/688810948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
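Nova's resource tracker gets the same numbers by shelling out rather than linking librados: the "Running cmd (subprocess)" line comes from oslo.concurrency's processutils, and the monitor's audit log shows the resulting df dispatch from client.openstack. A sketch of the identical call, assuming oslo.concurrency is installed:

    from oslo_concurrency import processutils

    # The exact command nova_compute logged above. execute() returns
    # (stdout, stderr) on success and raises ProcessExecutionError on a
    # non-zero exit code.
    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    print(out[:80])

Each invocation took about 0.45 s here, and the periodic pass runs it twice, once for the hypervisor resource view and again for inventory, as the second Running cmd entry at 20:34:41.789 shows.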
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.684 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.685 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5315MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.685 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.685 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.770 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.771 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:34:41 compute-0 nova_compute[248866]: 2025-11-25 20:34:41.789 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:34:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:34:42 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476263478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:34:42 compute-0 nova_compute[248866]: 2025-11-25 20:34:42.241 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:34:42 compute-0 nova_compute[248866]: 2025-11-25 20:34:42.248 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:34:42 compute-0 nova_compute[248866]: 2025-11-25 20:34:42.274 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:34:42 compute-0 nova_compute[248866]: 2025-11-25 20:34:42.276 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:34:42 compute-0 nova_compute[248866]: 2025-11-25 20:34:42.277 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
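The inventory nova just confirmed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f makes the placement capacity arithmetic concrete: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked out with the logged values:

    def effective(total, reserved, ratio):
        # Schedulable capacity as placement derives it from an inventory
        # record: (total - reserved) * allocation_ratio.
        return (total - reserved) * ratio

    print(effective(8, 0, 4.0))       # VCPU      -> 32.0 schedulable vCPUs
    print(effective(7680, 512, 1.0))  # MEMORY_MB -> 7168.0 MB for guests
    print(effective(59, 0, 0.9))      # DISK_GB   -> 53.1 of 59 GB usable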
Nov 25 20:34:42 compute-0 ceph-mon[75144]: pgmap v892: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:42 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1476263478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:34:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v893: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:44 compute-0 nova_compute[248866]: 2025-11-25 20:34:44.254 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:34:44 compute-0 nova_compute[248866]: 2025-11-25 20:34:44.255 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:34:44 compute-0 ceph-mon[75144]: pgmap v893: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:44 compute-0 podman[258307]: 2025-11-25 20:34:44.973141609 +0000 UTC m=+0.065548731 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 20:34:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v894: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:46 compute-0 ceph-mon[75144]: pgmap v894: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v895: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:48 compute-0 podman[258327]: 2025-11-25 20:34:48.004435112 +0000 UTC m=+0.098877237 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 20:34:48 compute-0 ceph-mon[75144]: pgmap v895: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:34:48.949 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:34:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:34:48.949 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:34:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:34:48.950 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:34:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v896: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:50 compute-0 ceph-mon[75144]: pgmap v896: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v897: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:51 compute-0 ceph-mon[75144]: pgmap v897: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v898: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:54 compute-0 ceph-mon[75144]: pgmap v898: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:55 compute-0 podman[258347]: 2025-11-25 20:34:55.023125144 +0000 UTC m=+0.113068005 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:34:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:34:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v899: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:56 compute-0 ceph-mon[75144]: pgmap v899: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:34:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:34:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:34:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:34:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:34:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:34:57
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr']
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
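The balancer pass interleaved above ran in upmap mode with a 5% misplaced ceiling and ended with "prepared 0/10 changes": with all 193 PGs active+clean across the seven listed pools there was nothing worth remapping, so no optimization plan was submitted. One way to confirm the same state interactively, assuming the ceph CLI and an admin keyring are available on this host:

    import json
    import subprocess

    # Query the mgr balancer module; an active balancer with no queued
    # optimizations matches the "prepared 0/10 changes" line above.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True)
    print(json.loads(out.stdout))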
Nov 25 20:34:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v900: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:58 compute-0 ceph-mon[75144]: pgmap v900: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:34:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v901: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:00 compute-0 ceph-mon[75144]: pgmap v901: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v902: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:35:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
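The pg_autoscaler lines carry their own arithmetic: each pool's PG target is its share of raw capacity times the cluster's PG budget, then quantized (to a power of two, with a floor for non-empty pools). The '.mgr' line checks out exactly if the budget is 300 PGs, consistent with the default mon_target_pg_per_osd of 100 and the three OSDs built from ceph_lv0/1/2; both of those figures are inferred here, not logged:

    usage_ratio = 1.4371499967441557e-05  # '.mgr' share of space, from the log
    pg_budget = 100 * 3                   # assumed: mon_target_pg_per_osd * OSDs
    print(usage_ratio * pg_budget)        # 0.004311449990232467, as logged,
                                          # which quantizes up to 1 PG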
Nov 25 20:35:02 compute-0 ceph-mon[75144]: pgmap v902: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v903: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:04 compute-0 ceph-mon[75144]: pgmap v903: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v904: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:06 compute-0 ceph-mon[75144]: pgmap v904: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v905: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:08 compute-0 ceph-mon[75144]: pgmap v905: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:09 compute-0 sudo[258374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:35:09 compute-0 sudo[258374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:09 compute-0 sudo[258374]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:09 compute-0 sudo[258399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:35:09 compute-0 sudo[258399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:09 compute-0 sudo[258399]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:09 compute-0 sudo[258424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:35:09 compute-0 sudo[258424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:09 compute-0 sudo[258424]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:09 compute-0 sudo[258449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:35:09 compute-0 sudo[258449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v906: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:09 compute-0 ceph-mon[75144]: pgmap v906: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:09 compute-0 sudo[258449]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:35:10 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:35:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:35:10 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:35:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:35:10 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:35:10 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev db83ca1c-5920-47c0-8ab8-04c1a327885c does not exist
Nov 25 20:35:10 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 6c8c4313-79a3-4936-9402-925268a7f3bb does not exist
Nov 25 20:35:10 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 25c25373-5926-403d-869d-f41d4dc279a9 does not exist
Nov 25 20:35:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:35:10 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:35:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:35:10 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:35:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:35:10 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:35:10 compute-0 sudo[258504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:35:10 compute-0 sudo[258504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:10 compute-0 sudo[258504]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:10 compute-0 sudo[258529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:35:10 compute-0 sudo[258529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:10 compute-0 sudo[258529]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:10 compute-0 sudo[258554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:35:10 compute-0 sudo[258554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:10 compute-0 sudo[258554]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:10 compute-0 sudo[258579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:35:10 compute-0 sudo[258579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
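This sudo entry is the OSD creation step itself: cephadm invokes its copied-in binary with the pinned ceph image and drives ceph-volume against the three pre-created logical volumes, piping the config and keyring in via "--config-json -". Reading the ceph-volume arguments out of the logged command, with each flag's meaning noted (the flag descriptions are standard ceph-volume behavior, not taken from this log):

    # Everything before "--" in the logged command is cephadm plumbing
    # (image pin, fsid, stdin config); these are the ceph-volume args.
    batch_args = [
        "lvm", "batch",
        "--no-auto",              # don't auto-sort devices into data/db roles
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--yes",                  # non-interactive: skip the confirmation prompt
        "--no-systemd",           # cephadm's containers manage the daemons,
    ]                             # so no systemd units are enabled here

The CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group environment pin ties the resulting OSDs back to the service spec that requested them.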
Nov 25 20:35:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:35:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:35:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:35:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:35:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:35:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:35:10 compute-0 podman[258643]: 2025-11-25 20:35:10.830886167 +0000 UTC m=+0.068830094 container create bdae44a077d149664690005962503f6ac7f8f21be85eb392c01648bd152bbfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 20:35:10 compute-0 systemd[1]: Started libpod-conmon-bdae44a077d149664690005962503f6ac7f8f21be85eb392c01648bd152bbfa3.scope.
Nov 25 20:35:10 compute-0 podman[258643]: 2025-11-25 20:35:10.801466501 +0000 UTC m=+0.039410428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:35:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:35:10 compute-0 podman[258643]: 2025-11-25 20:35:10.940192316 +0000 UTC m=+0.178136193 container init bdae44a077d149664690005962503f6ac7f8f21be85eb392c01648bd152bbfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:35:10 compute-0 podman[258643]: 2025-11-25 20:35:10.950092513 +0000 UTC m=+0.188036350 container start bdae44a077d149664690005962503f6ac7f8f21be85eb392c01648bd152bbfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:35:10 compute-0 podman[258643]: 2025-11-25 20:35:10.953656803 +0000 UTC m=+0.191600690 container attach bdae44a077d149664690005962503f6ac7f8f21be85eb392c01648bd152bbfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 20:35:10 compute-0 competent_chatterjee[258659]: 167 167
Nov 25 20:35:10 compute-0 systemd[1]: libpod-bdae44a077d149664690005962503f6ac7f8f21be85eb392c01648bd152bbfa3.scope: Deactivated successfully.
Nov 25 20:35:10 compute-0 podman[258643]: 2025-11-25 20:35:10.960390102 +0000 UTC m=+0.198333959 container died bdae44a077d149664690005962503f6ac7f8f21be85eb392c01648bd152bbfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:35:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e22790b640df5628e5e73bce9cff31eb399f0f711b7168815afbe2203dd35da2-merged.mount: Deactivated successfully.
Nov 25 20:35:11 compute-0 podman[258643]: 2025-11-25 20:35:11.003503542 +0000 UTC m=+0.241447379 container remove bdae44a077d149664690005962503f6ac7f8f21be85eb392c01648bd152bbfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:35:11 compute-0 systemd[1]: libpod-conmon-bdae44a077d149664690005962503f6ac7f8f21be85eb392c01648bd152bbfa3.scope: Deactivated successfully.
Nov 25 20:35:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:35:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4278 writes, 18K keys, 4278 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4278 writes, 4278 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1310 writes, 6200 keys, 1310 commit groups, 1.0 writes per commit group, ingest: 5.65 MB, 0.01 MB/s
                                           Interval WAL: 1310 writes, 1310 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     89.0      0.17              0.08        11    0.016       0      0       0.0       0.0
                                             L6      1/0    4.59 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   3.0     90.1     73.2      0.63              0.25        10    0.063     36K   5307       0.0       0.0
                                            Sum      1/0    4.59 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0     70.8     76.5      0.80              0.33        21    0.038     36K   5307       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2     54.9     55.6      0.52              0.16        10    0.052     20K   3027       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0     90.1     73.2      0.63              0.25        10    0.063     36K   5307       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.3      0.17              0.08        10    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.03 MB/s write, 0.06 GB read, 0.03 MB/s read, 0.8 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585aba031f0#2 capacity: 308.00 MB usage: 4.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(413,4.54 MB,1.47508%) FilterBlock(22,105.92 KB,0.0335842%) IndexBlock(22,196.05 KB,0.0621597%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 20:35:11 compute-0 podman[258683]: 2025-11-25 20:35:11.282208326 +0000 UTC m=+0.079614536 container create 700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:35:11 compute-0 podman[258683]: 2025-11-25 20:35:11.246842993 +0000 UTC m=+0.044249253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:35:11 compute-0 systemd[1]: Started libpod-conmon-700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6.scope.
Nov 25 20:35:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f1584aa7fa29409c7a7be971b3f904d3a99b0498e8e6a62d14828396a79b24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f1584aa7fa29409c7a7be971b3f904d3a99b0498e8e6a62d14828396a79b24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f1584aa7fa29409c7a7be971b3f904d3a99b0498e8e6a62d14828396a79b24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f1584aa7fa29409c7a7be971b3f904d3a99b0498e8e6a62d14828396a79b24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f1584aa7fa29409c7a7be971b3f904d3a99b0498e8e6a62d14828396a79b24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:11 compute-0 podman[258683]: 2025-11-25 20:35:11.401518695 +0000 UTC m=+0.198924955 container init 700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:35:11 compute-0 podman[258683]: 2025-11-25 20:35:11.417975257 +0000 UTC m=+0.215381477 container start 700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 20:35:11 compute-0 podman[258683]: 2025-11-25 20:35:11.422676269 +0000 UTC m=+0.220082489 container attach 700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:35:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v907: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:11 compute-0 ceph-mon[75144]: pgmap v907: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:12 compute-0 elated_hypatia[258699]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:35:12 compute-0 elated_hypatia[258699]: --> relative data size: 1.0
Nov 25 20:35:12 compute-0 elated_hypatia[258699]: --> All data devices are unavailable
Nov 25 20:35:12 compute-0 systemd[1]: libpod-700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6.scope: Deactivated successfully.
Nov 25 20:35:12 compute-0 systemd[1]: libpod-700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6.scope: Consumed 1.145s CPU time.
Nov 25 20:35:12 compute-0 podman[258728]: 2025-11-25 20:35:12.648236531 +0000 UTC m=+0.032264106 container died 700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-87f1584aa7fa29409c7a7be971b3f904d3a99b0498e8e6a62d14828396a79b24-merged.mount: Deactivated successfully.
Nov 25 20:35:12 compute-0 podman[258728]: 2025-11-25 20:35:12.728269928 +0000 UTC m=+0.112297443 container remove 700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:35:12 compute-0 systemd[1]: libpod-conmon-700c98ce9575291f06c82455c01910adfee7750431fda0f0078ba16ed16860b6.scope: Deactivated successfully.
Nov 25 20:35:12 compute-0 sudo[258579]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:12 compute-0 sudo[258743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:35:12 compute-0 sudo[258743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:12 compute-0 sudo[258743]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:12 compute-0 sudo[258768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:35:12 compute-0 sudo[258768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:12 compute-0 sudo[258768]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:13 compute-0 sudo[258793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:35:13 compute-0 sudo[258793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:13 compute-0 sudo[258793]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:13 compute-0 sudo[258818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:35:13 compute-0 sudo[258818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v908: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:13 compute-0 podman[258884]: 2025-11-25 20:35:13.648687665 +0000 UTC m=+0.074069269 container create 4f27fa87e66aab5016f7f3dec2d1f2e81f3ccede17926307b271c75b782bcc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:35:13 compute-0 systemd[1]: Started libpod-conmon-4f27fa87e66aab5016f7f3dec2d1f2e81f3ccede17926307b271c75b782bcc0a.scope.
Nov 25 20:35:13 compute-0 podman[258884]: 2025-11-25 20:35:13.620502644 +0000 UTC m=+0.045884298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:35:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:35:13 compute-0 podman[258884]: 2025-11-25 20:35:13.750553156 +0000 UTC m=+0.175934830 container init 4f27fa87e66aab5016f7f3dec2d1f2e81f3ccede17926307b271c75b782bcc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:35:13 compute-0 podman[258884]: 2025-11-25 20:35:13.762069879 +0000 UTC m=+0.187451483 container start 4f27fa87e66aab5016f7f3dec2d1f2e81f3ccede17926307b271c75b782bcc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:35:13 compute-0 podman[258884]: 2025-11-25 20:35:13.766909135 +0000 UTC m=+0.192290739 container attach 4f27fa87e66aab5016f7f3dec2d1f2e81f3ccede17926307b271c75b782bcc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:35:13 compute-0 beautiful_williams[258900]: 167 167
Nov 25 20:35:13 compute-0 systemd[1]: libpod-4f27fa87e66aab5016f7f3dec2d1f2e81f3ccede17926307b271c75b782bcc0a.scope: Deactivated successfully.
Nov 25 20:35:13 compute-0 podman[258884]: 2025-11-25 20:35:13.768946391 +0000 UTC m=+0.194327995 container died 4f27fa87e66aab5016f7f3dec2d1f2e81f3ccede17926307b271c75b782bcc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbf2ea69ccfb1091f0032a72109e61d0db456313277a2884ce99a2563b9419e8-merged.mount: Deactivated successfully.
Nov 25 20:35:13 compute-0 podman[258884]: 2025-11-25 20:35:13.817350011 +0000 UTC m=+0.242731615 container remove 4f27fa87e66aab5016f7f3dec2d1f2e81f3ccede17926307b271c75b782bcc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:35:13 compute-0 systemd[1]: libpod-conmon-4f27fa87e66aab5016f7f3dec2d1f2e81f3ccede17926307b271c75b782bcc0a.scope: Deactivated successfully.
Nov 25 20:35:14 compute-0 podman[258924]: 2025-11-25 20:35:14.07274469 +0000 UTC m=+0.064041689 container create 01d1006d1903bf6021a727f300b4cfa9024087df193268ce61393e7daa27642f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:35:14 compute-0 systemd[1]: Started libpod-conmon-01d1006d1903bf6021a727f300b4cfa9024087df193268ce61393e7daa27642f.scope.
Nov 25 20:35:14 compute-0 podman[258924]: 2025-11-25 20:35:14.046438321 +0000 UTC m=+0.037735320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:35:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9cecbfc20aa3c5142306ab5a0e7dd2b193dc4190e6bb02677d48a4e9b36067/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9cecbfc20aa3c5142306ab5a0e7dd2b193dc4190e6bb02677d48a4e9b36067/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9cecbfc20aa3c5142306ab5a0e7dd2b193dc4190e6bb02677d48a4e9b36067/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9cecbfc20aa3c5142306ab5a0e7dd2b193dc4190e6bb02677d48a4e9b36067/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:14 compute-0 podman[258924]: 2025-11-25 20:35:14.201905826 +0000 UTC m=+0.193202875 container init 01d1006d1903bf6021a727f300b4cfa9024087df193268ce61393e7daa27642f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:35:14 compute-0 podman[258924]: 2025-11-25 20:35:14.209697545 +0000 UTC m=+0.200994544 container start 01d1006d1903bf6021a727f300b4cfa9024087df193268ce61393e7daa27642f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:35:14 compute-0 podman[258924]: 2025-11-25 20:35:14.214029826 +0000 UTC m=+0.205326865 container attach 01d1006d1903bf6021a727f300b4cfa9024087df193268ce61393e7daa27642f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:35:14 compute-0 ceph-mon[75144]: pgmap v908: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:14 compute-0 bold_yalow[258940]: {
Nov 25 20:35:14 compute-0 bold_yalow[258940]:     "0": [
Nov 25 20:35:14 compute-0 bold_yalow[258940]:         {
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "devices": [
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "/dev/loop3"
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             ],
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_name": "ceph_lv0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_size": "21470642176",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "name": "ceph_lv0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "tags": {
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.cluster_name": "ceph",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.crush_device_class": "",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.encrypted": "0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.osd_id": "0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.type": "block",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.vdo": "0"
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             },
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "type": "block",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "vg_name": "ceph_vg0"
Nov 25 20:35:14 compute-0 bold_yalow[258940]:         }
Nov 25 20:35:14 compute-0 bold_yalow[258940]:     ],
Nov 25 20:35:14 compute-0 bold_yalow[258940]:     "1": [
Nov 25 20:35:14 compute-0 bold_yalow[258940]:         {
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "devices": [
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "/dev/loop4"
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             ],
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_name": "ceph_lv1",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_size": "21470642176",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "name": "ceph_lv1",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "tags": {
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.cluster_name": "ceph",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.crush_device_class": "",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.encrypted": "0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.osd_id": "1",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.type": "block",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.vdo": "0"
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             },
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "type": "block",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "vg_name": "ceph_vg1"
Nov 25 20:35:14 compute-0 bold_yalow[258940]:         }
Nov 25 20:35:14 compute-0 bold_yalow[258940]:     ],
Nov 25 20:35:14 compute-0 bold_yalow[258940]:     "2": [
Nov 25 20:35:14 compute-0 bold_yalow[258940]:         {
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "devices": [
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "/dev/loop5"
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             ],
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_name": "ceph_lv2",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_size": "21470642176",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "name": "ceph_lv2",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "tags": {
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.cluster_name": "ceph",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.crush_device_class": "",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.encrypted": "0",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.osd_id": "2",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.type": "block",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:                 "ceph.vdo": "0"
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             },
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "type": "block",
Nov 25 20:35:14 compute-0 bold_yalow[258940]:             "vg_name": "ceph_vg2"
Nov 25 20:35:14 compute-0 bold_yalow[258940]:         }
Nov 25 20:35:14 compute-0 bold_yalow[258940]:     ]
Nov 25 20:35:14 compute-0 bold_yalow[258940]: }
Nov 25 20:35:14 compute-0 systemd[1]: libpod-01d1006d1903bf6021a727f300b4cfa9024087df193268ce61393e7daa27642f.scope: Deactivated successfully.
Nov 25 20:35:14 compute-0 podman[258924]: 2025-11-25 20:35:14.984679748 +0000 UTC m=+0.975976777 container died 01d1006d1903bf6021a727f300b4cfa9024087df193268ce61393e7daa27642f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 20:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d9cecbfc20aa3c5142306ab5a0e7dd2b193dc4190e6bb02677d48a4e9b36067-merged.mount: Deactivated successfully.
Nov 25 20:35:15 compute-0 podman[258924]: 2025-11-25 20:35:15.072235717 +0000 UTC m=+1.063532686 container remove 01d1006d1903bf6021a727f300b4cfa9024087df193268ce61393e7daa27642f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_yalow, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 20:35:15 compute-0 systemd[1]: libpod-conmon-01d1006d1903bf6021a727f300b4cfa9024087df193268ce61393e7daa27642f.scope: Deactivated successfully.
Nov 25 20:35:15 compute-0 sudo[258818]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:15 compute-0 podman[258951]: 2025-11-25 20:35:15.108866585 +0000 UTC m=+0.094832253 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 20:35:15 compute-0 sudo[258982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:35:15 compute-0 sudo[258982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:15 compute-0 sudo[258982]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:15 compute-0 sudo[259007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:35:15 compute-0 sudo[259007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:15 compute-0 sudo[259007]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:15 compute-0 sudo[259032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:35:15 compute-0 sudo[259032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:15 compute-0 sudo[259032]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:15 compute-0 sudo[259057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:35:15 compute-0 sudo[259057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v909: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:15 compute-0 podman[259121]: 2025-11-25 20:35:15.951177669 +0000 UTC m=+0.066498317 container create 9ef6c90c2db352cc0d1f7f3d11b4c7ff63ab2032934d7229317c4fe7e0e8092e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:35:15 compute-0 systemd[1]: Started libpod-conmon-9ef6c90c2db352cc0d1f7f3d11b4c7ff63ab2032934d7229317c4fe7e0e8092e.scope.
Nov 25 20:35:16 compute-0 podman[259121]: 2025-11-25 20:35:15.923858513 +0000 UTC m=+0.039179221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:35:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:35:16 compute-0 podman[259121]: 2025-11-25 20:35:16.064355887 +0000 UTC m=+0.179676585 container init 9ef6c90c2db352cc0d1f7f3d11b4c7ff63ab2032934d7229317c4fe7e0e8092e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 20:35:16 compute-0 podman[259121]: 2025-11-25 20:35:16.075928112 +0000 UTC m=+0.191248770 container start 9ef6c90c2db352cc0d1f7f3d11b4c7ff63ab2032934d7229317c4fe7e0e8092e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:35:16 compute-0 podman[259121]: 2025-11-25 20:35:16.081353244 +0000 UTC m=+0.196673952 container attach 9ef6c90c2db352cc0d1f7f3d11b4c7ff63ab2032934d7229317c4fe7e0e8092e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:35:16 compute-0 angry_goldstine[259137]: 167 167
Nov 25 20:35:16 compute-0 systemd[1]: libpod-9ef6c90c2db352cc0d1f7f3d11b4c7ff63ab2032934d7229317c4fe7e0e8092e.scope: Deactivated successfully.
Nov 25 20:35:16 compute-0 podman[259121]: 2025-11-25 20:35:16.083352149 +0000 UTC m=+0.198672807 container died 9ef6c90c2db352cc0d1f7f3d11b4c7ff63ab2032934d7229317c4fe7e0e8092e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b0917ec0bc26f7b2cf6d5de065ef1ec16deb6c6be9dc79a96bb146cb6e1775f-merged.mount: Deactivated successfully.
Nov 25 20:35:16 compute-0 podman[259121]: 2025-11-25 20:35:16.135751911 +0000 UTC m=+0.251072539 container remove 9ef6c90c2db352cc0d1f7f3d11b4c7ff63ab2032934d7229317c4fe7e0e8092e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:35:16 compute-0 systemd[1]: libpod-conmon-9ef6c90c2db352cc0d1f7f3d11b4c7ff63ab2032934d7229317c4fe7e0e8092e.scope: Deactivated successfully.
Nov 25 20:35:16 compute-0 podman[259162]: 2025-11-25 20:35:16.359509822 +0000 UTC m=+0.047386561 container create 5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 20:35:16 compute-0 systemd[1]: Started libpod-conmon-5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb.scope.
Nov 25 20:35:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:35:16 compute-0 podman[259162]: 2025-11-25 20:35:16.340531569 +0000 UTC m=+0.028408288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1b844eb89a0e50a59bfdcc536c081dd5de548d4f33040505a32f2a66dd24a42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1b844eb89a0e50a59bfdcc536c081dd5de548d4f33040505a32f2a66dd24a42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1b844eb89a0e50a59bfdcc536c081dd5de548d4f33040505a32f2a66dd24a42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1b844eb89a0e50a59bfdcc536c081dd5de548d4f33040505a32f2a66dd24a42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:35:16 compute-0 podman[259162]: 2025-11-25 20:35:16.455689431 +0000 UTC m=+0.143566210 container init 5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:35:16 compute-0 podman[259162]: 2025-11-25 20:35:16.470897809 +0000 UTC m=+0.158774518 container start 5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:35:16 compute-0 podman[259162]: 2025-11-25 20:35:16.474026857 +0000 UTC m=+0.161903646 container attach 5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 20:35:16 compute-0 ceph-mon[75144]: pgmap v909: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:35:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1921634582' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:35:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:35:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1921634582' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:35:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v910: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:17 compute-0 elated_wozniak[259178]: {
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "osd_id": 2,
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "type": "bluestore"
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:     },
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "osd_id": 1,
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "type": "bluestore"
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:     },
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "osd_id": 0,
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:         "type": "bluestore"
Nov 25 20:35:17 compute-0 elated_wozniak[259178]:     }
Nov 25 20:35:17 compute-0 elated_wozniak[259178]: }
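The JSON block that elated_wozniak printed above is an OSD inventory keyed by osd_uuid; the shape matches ceph-volume's raw/LVM list output, though the exact command cephadm ran inside the short-lived container is not visible in the log. A minimal sketch of consuming it (one entry copied from the log; the osd.1 and osd.2 entries are elided for brevity):

    import json

    raw = '''{
      "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
        "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
        "type": "bluestore"
      }
    }'''  # osd.1 / osd.2 entries from the log omitted here
    inventory = json.loads(raw)
    for osd in sorted(inventory.values(), key=lambda o: o["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")
    # -> osd.0: /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)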
Nov 25 20:35:17 compute-0 systemd[1]: libpod-5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb.scope: Deactivated successfully.
Nov 25 20:35:17 compute-0 systemd[1]: libpod-5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb.scope: Consumed 1.116s CPU time.
Nov 25 20:35:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1921634582' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:35:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1921634582' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:35:17 compute-0 podman[259212]: 2025-11-25 20:35:17.622879136 +0000 UTC m=+0.029368985 container died 5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1b844eb89a0e50a59bfdcc536c081dd5de548d4f33040505a32f2a66dd24a42-merged.mount: Deactivated successfully.
Nov 25 20:35:17 compute-0 podman[259212]: 2025-11-25 20:35:17.695246158 +0000 UTC m=+0.101735997 container remove 5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:35:17 compute-0 systemd[1]: libpod-conmon-5972b992474df9291e34d363efc524e22b668a1a1e7652032d2d57bf2e2e8aeb.scope: Deactivated successfully.
Nov 25 20:35:17 compute-0 sudo[259057]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:35:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:35:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:35:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:35:17 compute-0 sudo[259226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:35:17 compute-0 sudo[259226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:17 compute-0 sudo[259226]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:17 compute-0 sudo[259251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:35:17 compute-0 sudo[259251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:35:17 compute-0 sudo[259251]: pam_unix(sudo:session): session closed for user root
Nov 25 20:35:18 compute-0 ceph-mon[75144]: pgmap v910: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:18 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:35:18 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:35:19 compute-0 podman[259276]: 2025-11-25 20:35:19.004194721 +0000 UTC m=+0.090046128 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
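The recurring health_status=healthy events for multipathd, ovn_controller and ovn_metadata_agent are podman's healthcheck timer executing the test configured in config_data ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/...). The same probe can be run by hand; a sketch using the container names from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured check once
    # and exits 0 when healthy, the same check the timer performs.
    for name in ("multipathd", "ovn_controller", "ovn_metadata_agent"):
        r = subprocess.run(["podman", "healthcheck", "run", name],
                           capture_output=True, text=True)
        print(name, "healthy" if r.returncode == 0 else f"unhealthy rc={r.returncode}")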
Nov 25 20:35:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v911: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:20 compute-0 ceph-mon[75144]: pgmap v911: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v912: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:22 compute-0 ceph-mon[75144]: pgmap v912: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v913: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:24 compute-0 ceph-mon[75144]: pgmap v913: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v914: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:26 compute-0 podman[259296]: 2025-11-25 20:35:26.024431958 +0000 UTC m=+0.117161020 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 20:35:26 compute-0 ceph-mon[75144]: pgmap v914: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:35:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:35:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:35:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:35:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:35:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:35:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v915: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:28 compute-0 ceph-mon[75144]: pgmap v915: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v916: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:29 compute-0 ceph-mon[75144]: pgmap v916: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v917: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:32 compute-0 ceph-mon[75144]: pgmap v917: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v918: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:34 compute-0 ceph-mon[75144]: pgmap v918: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v919: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:36 compute-0 nova_compute[248866]: 2025-11-25 20:35:36.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:36 compute-0 nova_compute[248866]: 2025-11-25 20:35:36.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:36 compute-0 nova_compute[248866]: 2025-11-25 20:35:36.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:36 compute-0 nova_compute[248866]: 2025-11-25 20:35:36.041 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 20:35:36 compute-0 nova_compute[248866]: 2025-11-25 20:35:36.055 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 20:35:36 compute-0 ceph-mon[75144]: pgmap v919: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:37 compute-0 nova_compute[248866]: 2025-11-25 20:35:37.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:37 compute-0 nova_compute[248866]: 2025-11-25 20:35:37.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 20:35:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v920: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:38 compute-0 nova_compute[248866]: 2025-11-25 20:35:38.548 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:38 compute-0 ceph-mon[75144]: pgmap v920: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:39 compute-0 nova_compute[248866]: 2025-11-25 20:35:39.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:39 compute-0 nova_compute[248866]: 2025-11-25 20:35:39.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:39 compute-0 nova_compute[248866]: 2025-11-25 20:35:39.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:35:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v921: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:40 compute-0 nova_compute[248866]: 2025-11-25 20:35:40.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:40 compute-0 ceph-mon[75144]: pgmap v921: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.057 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.058 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.087 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.087 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.088 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.088 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.089 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:35:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v922: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:35:41 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1066982892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.567 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
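The resource-tracker audit shells out with exactly the command shown on the Running cmd line above and reads cluster free space from the JSON reply. A standalone version of that probe; the 'stats'/'total_avail_bytes' fields are the stock ceph df --format=json layout, assumed unchanged here:

    import json, subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]  # verbatim from the log line above
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)  # ~60 GiB per the pgmap lines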
Nov 25 20:35:41 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1066982892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.802 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.803 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5297MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.803 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.804 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.871 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.872 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:35:41 compute-0 nova_compute[248866]: 2025-11-25 20:35:41.895 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:35:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:35:42 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639755747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:35:42 compute-0 nova_compute[248866]: 2025-11-25 20:35:42.363 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:35:42 compute-0 nova_compute[248866]: 2025-11-25 20:35:42.372 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:35:42 compute-0 nova_compute[248866]: 2025-11-25 20:35:42.396 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
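The inventory dict in the line above is what this node reports to placement. Schedulable capacity per resource class follows placement's usual formula, (total - reserved) × allocation_ratio (assumed stock here); worked out with the logged numbers:

    inventory = {  # copied from the set_inventory_for_provider line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1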
Nov 25 20:35:42 compute-0 nova_compute[248866]: 2025-11-25 20:35:42.398 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:35:42 compute-0 nova_compute[248866]: 2025-11-25 20:35:42.399 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:35:42 compute-0 ceph-mon[75144]: pgmap v922: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:42 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/639755747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:35:43 compute-0 nova_compute[248866]: 2025-11-25 20:35:43.380 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:43 compute-0 nova_compute[248866]: 2025-11-25 20:35:43.399 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:43 compute-0 nova_compute[248866]: 2025-11-25 20:35:43.399 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:35:43 compute-0 nova_compute[248866]: 2025-11-25 20:35:43.400 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:35:43 compute-0 nova_compute[248866]: 2025-11-25 20:35:43.414 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:35:43 compute-0 nova_compute[248866]: 2025-11-25 20:35:43.414 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v923: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:43 compute-0 ceph-mon[75144]: pgmap v923: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:45 compute-0 nova_compute[248866]: 2025-11-25 20:35:45.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:35:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v924: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:45 compute-0 podman[259368]: 2025-11-25 20:35:45.961796794 +0000 UTC m=+0.057963838 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 20:35:46 compute-0 ceph-mon[75144]: pgmap v924: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v925: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:48 compute-0 ceph-mon[75144]: pgmap v925: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:35:48.950 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:35:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:35:48.951 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:35:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:35:48.951 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:35:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v926: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:50 compute-0 podman[259389]: 2025-11-25 20:35:50.00383529 +0000 UTC m=+0.089844853 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 20:35:50 compute-0 ceph-mon[75144]: pgmap v926: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v927: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:51 compute-0 ceph-mon[75144]: pgmap v927: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v928: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:53 compute-0 ceph-mon[75144]: pgmap v928: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:35:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v929: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:55 compute-0 ceph-mon[75144]: pgmap v929: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:35:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:35:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:35:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:35:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:35:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:35:56 compute-0 podman[259409]: 2025-11-25 20:35:56.986346359 +0000 UTC m=+0.085020317 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:35:57
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', 'backups']
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:35:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v930: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:58 compute-0 ceph-mon[75144]: pgmap v930: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:35:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v931: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:00 compute-0 ceph-mon[75144]: pgmap v931: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v932: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:36:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
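.mgr is the only pool holding data, and its line shows the autoscaler's arithmetic: pg target = usage fraction × bias × (target PGs per OSD × OSD count), then quantized (to a minimum of 1 here). With this cluster's three OSDs (see the 20:35:17 inventory) and the default mon_target_pg_per_osd of 100, the logged numbers reproduce:

    usage_ratio = 1.4371499967441557e-05  # from the '.mgr' autoscaler line above
    bias = 1.0
    target_pg_per_osd, n_osds = 100, 3    # assumed default, times the 3 OSDs above
    pg_target = usage_ratio * bias * target_pg_per_osd * n_osds
    print(pg_target)  # ~0.0043114, matching the logged target before quantization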
Nov 25 20:36:02 compute-0 ceph-mon[75144]: pgmap v932: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v933: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:04 compute-0 ceph-mon[75144]: pgmap v933: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v934: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:06 compute-0 ceph-mon[75144]: pgmap v934: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v935: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:08 compute-0 ceph-mon[75144]: pgmap v935: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v936: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:10 compute-0 ceph-mon[75144]: pgmap v936: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v937: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:11 compute-0 ceph-mon[75144]: pgmap v937: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v938: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:14 compute-0 ceph-mon[75144]: pgmap v938: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v939: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:16 compute-0 ceph-mon[75144]: pgmap v939: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:16 compute-0 podman[259435]: 2025-11-25 20:36:16.988218769 +0000 UTC m=+0.070360076 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 20:36:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:36:16 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/200042302' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:36:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:36:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/200042302' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:36:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v940: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/200042302' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:36:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/200042302' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:36:18 compute-0 sudo[259454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:18 compute-0 sudo[259454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:18 compute-0 sudo[259454]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:18 compute-0 sudo[259479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:36:18 compute-0 sudo[259479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:18 compute-0 sudo[259479]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:18 compute-0 sudo[259504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:18 compute-0 sudo[259504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:18 compute-0 sudo[259504]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:18 compute-0 sudo[259529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:36:18 compute-0 sudo[259529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:18 compute-0 ceph-mon[75144]: pgmap v940: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:18 compute-0 sudo[259529]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:19 compute-0 sudo[259586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:19 compute-0 sudo[259586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:19 compute-0 sudo[259586]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:19 compute-0 sudo[259611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:36:19 compute-0 sudo[259611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:19 compute-0 sudo[259611]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:19 compute-0 sudo[259636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:19 compute-0 sudo[259636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:19 compute-0 sudo[259636]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:19 compute-0 sudo[259661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 25 20:36:19 compute-0 sudo[259661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v941: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:19 compute-0 sudo[259661]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:36:19 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:36:19 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:36:19 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:36:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:36:19 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:36:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:36:19 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:19 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev afc4d9af-fe25-4927-8cd4-e98859ebc450 does not exist
Nov 25 20:36:19 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 7a481869-dc76-4fc2-bb75-f0db12c29622 does not exist
Nov 25 20:36:19 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 49d8d99b-c35b-4996-bcc0-231a21816598 does not exist
Nov 25 20:36:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:36:19 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:36:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:36:19 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:36:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:36:19 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
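# [annotation] Before touching the OSD devices, the cephadm module gathers what
# the ceph-volume container will need: a minimal ceph.conf, the client.admin
# and client.bootstrap-osd keys, and the list of destroyed OSD ids whose slots
# could be reused. CLI equivalents of the mon_commands dispatched above:
#
#   ceph config generate-minimal-conf
#   ceph auth get client.admin
#   ceph auth get client.bootstrap-osd
#   ceph osd tree destroyed --format json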
Nov 25 20:36:19 compute-0 sudo[259705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:19 compute-0 sudo[259705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:19 compute-0 sudo[259705]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:19 compute-0 sudo[259730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:36:19 compute-0 sudo[259730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:19 compute-0 sudo[259730]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:19 compute-0 sudo[259755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:19 compute-0 sudo[259755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:19 compute-0 sudo[259755]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:20 compute-0 sudo[259780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:36:20 compute-0 sudo[259780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:20 compute-0 podman[259846]: 2025-11-25 20:36:20.444263515 +0000 UTC m=+0.064351827 container create a087a4a3025ec98d7af8c63486aaa4966fa174efa1191a4519ee9238dd7a090c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:36:20 compute-0 systemd[1]: Started libpod-conmon-a087a4a3025ec98d7af8c63486aaa4966fa174efa1191a4519ee9238dd7a090c.scope.
Nov 25 20:36:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:36:20 compute-0 podman[259846]: 2025-11-25 20:36:20.418499102 +0000 UTC m=+0.038587464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:36:20 compute-0 podman[259846]: 2025-11-25 20:36:20.534699343 +0000 UTC m=+0.154787655 container init a087a4a3025ec98d7af8c63486aaa4966fa174efa1191a4519ee9238dd7a090c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:36:20 compute-0 podman[259846]: 2025-11-25 20:36:20.547147923 +0000 UTC m=+0.167236205 container start a087a4a3025ec98d7af8c63486aaa4966fa174efa1191a4519ee9238dd7a090c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 20:36:20 compute-0 podman[259846]: 2025-11-25 20:36:20.551358971 +0000 UTC m=+0.171447283 container attach a087a4a3025ec98d7af8c63486aaa4966fa174efa1191a4519ee9238dd7a090c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:36:20 compute-0 competent_bell[259863]: 167 167
Nov 25 20:36:20 compute-0 systemd[1]: libpod-a087a4a3025ec98d7af8c63486aaa4966fa174efa1191a4519ee9238dd7a090c.scope: Deactivated successfully.
Nov 25 20:36:20 compute-0 podman[259846]: 2025-11-25 20:36:20.558258965 +0000 UTC m=+0.178347287 container died a087a4a3025ec98d7af8c63486aaa4966fa174efa1191a4519ee9238dd7a090c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e961265f5177f0aaa593c7dd95dc6cf0e995cf6299e56a00538090e962f5acde-merged.mount: Deactivated successfully.
Nov 25 20:36:20 compute-0 podman[259846]: 2025-11-25 20:36:20.626062658 +0000 UTC m=+0.246150940 container remove a087a4a3025ec98d7af8c63486aaa4966fa174efa1191a4519ee9238dd7a090c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:36:20 compute-0 ceph-mon[75144]: pgmap v941: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:36:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:36:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:36:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:36:20 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:36:20 compute-0 podman[259860]: 2025-11-25 20:36:20.626995594 +0000 UTC m=+0.129683661 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 20:36:20 compute-0 systemd[1]: libpod-conmon-a087a4a3025ec98d7af8c63486aaa4966fa174efa1191a4519ee9238dd7a090c.scope: Deactivated successfully.
Nov 25 20:36:20 compute-0 podman[259906]: 2025-11-25 20:36:20.860726966 +0000 UTC m=+0.061091746 container create 97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:36:20 compute-0 systemd[1]: Started libpod-conmon-97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad.scope.
Nov 25 20:36:20 compute-0 podman[259906]: 2025-11-25 20:36:20.840664053 +0000 UTC m=+0.041028833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:36:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0d6a792d55bc71ae9d95a223ce4896749b9e90b6a206b5971b597aebe9f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0d6a792d55bc71ae9d95a223ce4896749b9e90b6a206b5971b597aebe9f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0d6a792d55bc71ae9d95a223ce4896749b9e90b6a206b5971b597aebe9f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0d6a792d55bc71ae9d95a223ce4896749b9e90b6a206b5971b597aebe9f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0d6a792d55bc71ae9d95a223ce4896749b9e90b6a206b5971b597aebe9f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
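# [annotation] The kernel "xfs filesystem being remounted ... supports
# timestamps until 2038" lines are informational: each file bind-mounted into
# the container triggers a notice that the backing XFS filesystem was created
# without big timestamps. Whether a filesystem carries the bigtime feature can
# be checked with xfs_info (mount point assumed here; older xfsprogs may not
# print the flag at all):
#
#   xfs_info /var/lib/containers | grep -o 'bigtime=[01]'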
Nov 25 20:36:20 compute-0 podman[259906]: 2025-11-25 20:36:20.975722824 +0000 UTC m=+0.176087644 container init 97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:36:20 compute-0 podman[259906]: 2025-11-25 20:36:20.986095905 +0000 UTC m=+0.186460645 container start 97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:36:20 compute-0 podman[259906]: 2025-11-25 20:36:20.989632314 +0000 UTC m=+0.189997155 container attach 97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:36:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v942: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:22 compute-0 charming_leakey[259923]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:36:22 compute-0 charming_leakey[259923]: --> relative data size: 1.0
Nov 25 20:36:22 compute-0 charming_leakey[259923]: --> All data devices are unavailable
Nov 25 20:36:22 compute-0 systemd[1]: libpod-97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad.scope: Deactivated successfully.
Nov 25 20:36:22 compute-0 systemd[1]: libpod-97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad.scope: Consumed 1.129s CPU time.
Nov 25 20:36:22 compute-0 podman[259906]: 2025-11-25 20:36:22.171886792 +0000 UTC m=+1.372251582 container died 97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-32ee0d6a792d55bc71ae9d95a223ce4896749b9e90b6a206b5971b597aebe9f0-merged.mount: Deactivated successfully.
Nov 25 20:36:22 compute-0 podman[259906]: 2025-11-25 20:36:22.257018032 +0000 UTC m=+1.457382802 container remove 97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leakey, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 20:36:22 compute-0 systemd[1]: libpod-conmon-97bef7e55875c82d9436128827da754c0f4188b9ba77835ec9e450ffd267b3ad.scope: Deactivated successfully.
Nov 25 20:36:22 compute-0 sudo[259780]: pam_unix(sudo:session): session closed for user root
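# [annotation] The "lvm batch" run above ended with "All data devices are
# unavailable" and created nothing; the "lvm list" output that follows suggests
# why: all three LVs already carry OSD tags (ceph.osd_id 0-2), so batch treats
# them as consumed rather than as fresh devices. A non-destructive way to see
# the same decision, assuming the cephadm wrapper is on PATH (the logged run
# also fed a config via --config-json, omitted here):
#
#   cephadm shell -- ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --report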
Nov 25 20:36:22 compute-0 sudo[259966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:22 compute-0 sudo[259966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:22 compute-0 sudo[259966]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:22 compute-0 sudo[259991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:36:22 compute-0 sudo[259991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:22 compute-0 sudo[259991]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:22 compute-0 sudo[260016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:22 compute-0 sudo[260016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:22 compute-0 sudo[260016]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:22 compute-0 ceph-mon[75144]: pgmap v942: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:22 compute-0 sudo[260041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:36:22 compute-0 sudo[260041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:23 compute-0 podman[260107]: 2025-11-25 20:36:23.194453477 +0000 UTC m=+0.064400519 container create a57cf707b88d9338abd1e006c0e5a9e717f64d2a830ba5a13127258ad562d16c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brattain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 20:36:23 compute-0 systemd[1]: Started libpod-conmon-a57cf707b88d9338abd1e006c0e5a9e717f64d2a830ba5a13127258ad562d16c.scope.
Nov 25 20:36:23 compute-0 podman[260107]: 2025-11-25 20:36:23.168474788 +0000 UTC m=+0.038421890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:36:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:36:23 compute-0 podman[260107]: 2025-11-25 20:36:23.290775401 +0000 UTC m=+0.160722443 container init a57cf707b88d9338abd1e006c0e5a9e717f64d2a830ba5a13127258ad562d16c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:36:23 compute-0 podman[260107]: 2025-11-25 20:36:23.303325904 +0000 UTC m=+0.173272926 container start a57cf707b88d9338abd1e006c0e5a9e717f64d2a830ba5a13127258ad562d16c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brattain, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:36:23 compute-0 podman[260107]: 2025-11-25 20:36:23.307689815 +0000 UTC m=+0.177636867 container attach a57cf707b88d9338abd1e006c0e5a9e717f64d2a830ba5a13127258ad562d16c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:36:23 compute-0 recursing_brattain[260124]: 167 167
Nov 25 20:36:23 compute-0 systemd[1]: libpod-a57cf707b88d9338abd1e006c0e5a9e717f64d2a830ba5a13127258ad562d16c.scope: Deactivated successfully.
Nov 25 20:36:23 compute-0 podman[260107]: 2025-11-25 20:36:23.311250706 +0000 UTC m=+0.181197728 container died a57cf707b88d9338abd1e006c0e5a9e717f64d2a830ba5a13127258ad562d16c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-504bf4814931c3dc7aaebcfa29eb32d28eae75e2e6911a39a75034d2ee13ce46-merged.mount: Deactivated successfully.
Nov 25 20:36:23 compute-0 podman[260107]: 2025-11-25 20:36:23.357037921 +0000 UTC m=+0.226984943 container remove a57cf707b88d9338abd1e006c0e5a9e717f64d2a830ba5a13127258ad562d16c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brattain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:36:23 compute-0 systemd[1]: libpod-conmon-a57cf707b88d9338abd1e006c0e5a9e717f64d2a830ba5a13127258ad562d16c.scope: Deactivated successfully.
Nov 25 20:36:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v943: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:23 compute-0 podman[260147]: 2025-11-25 20:36:23.640792516 +0000 UTC m=+0.109930516 container create 6ef76129027674fc757381c2aa8f6013f5468ff73ad961138644d69e2d363457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gould, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:36:23 compute-0 systemd[1]: Started libpod-conmon-6ef76129027674fc757381c2aa8f6013f5468ff73ad961138644d69e2d363457.scope.
Nov 25 20:36:23 compute-0 ceph-mon[75144]: pgmap v943: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:23 compute-0 podman[260147]: 2025-11-25 20:36:23.611115644 +0000 UTC m=+0.080253704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:36:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e94f958896a7cb1b308bc89337cc150ec9f537e91f1803547751dd6d6664227/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e94f958896a7cb1b308bc89337cc150ec9f537e91f1803547751dd6d6664227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e94f958896a7cb1b308bc89337cc150ec9f537e91f1803547751dd6d6664227/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e94f958896a7cb1b308bc89337cc150ec9f537e91f1803547751dd6d6664227/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:23 compute-0 podman[260147]: 2025-11-25 20:36:23.737704177 +0000 UTC m=+0.206842227 container init 6ef76129027674fc757381c2aa8f6013f5468ff73ad961138644d69e2d363457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:36:23 compute-0 podman[260147]: 2025-11-25 20:36:23.752016959 +0000 UTC m=+0.221154959 container start 6ef76129027674fc757381c2aa8f6013f5468ff73ad961138644d69e2d363457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:36:23 compute-0 podman[260147]: 2025-11-25 20:36:23.756931087 +0000 UTC m=+0.226069117 container attach 6ef76129027674fc757381c2aa8f6013f5468ff73ad961138644d69e2d363457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:36:24 compute-0 sleepy_gould[260163]: {
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:     "0": [
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:         {
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "devices": [
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "/dev/loop3"
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             ],
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_name": "ceph_lv0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_size": "21470642176",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "name": "ceph_lv0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "tags": {
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.cluster_name": "ceph",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.crush_device_class": "",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.encrypted": "0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.osd_id": "0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.type": "block",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.vdo": "0"
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             },
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "type": "block",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "vg_name": "ceph_vg0"
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:         }
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:     ],
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:     "1": [
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:         {
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "devices": [
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "/dev/loop4"
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             ],
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_name": "ceph_lv1",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_size": "21470642176",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "name": "ceph_lv1",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "tags": {
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.cluster_name": "ceph",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.crush_device_class": "",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.encrypted": "0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.osd_id": "1",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.type": "block",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.vdo": "0"
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             },
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "type": "block",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "vg_name": "ceph_vg1"
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:         }
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:     ],
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:     "2": [
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:         {
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "devices": [
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "/dev/loop5"
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             ],
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_name": "ceph_lv2",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_size": "21470642176",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "name": "ceph_lv2",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "tags": {
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.cluster_name": "ceph",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.crush_device_class": "",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.encrypted": "0",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.osd_id": "2",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.type": "block",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:                 "ceph.vdo": "0"
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             },
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "type": "block",
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:             "vg_name": "ceph_vg2"
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:         }
Nov 25 20:36:24 compute-0 sleepy_gould[260163]:     ]
Nov 25 20:36:24 compute-0 sleepy_gould[260163]: }
Nov 25 20:36:24 compute-0 systemd[1]: libpod-6ef76129027674fc757381c2aa8f6013f5468ff73ad961138644d69e2d363457.scope: Deactivated successfully.
Nov 25 20:36:24 compute-0 podman[260147]: 2025-11-25 20:36:24.537836897 +0000 UTC m=+1.006974927 container died 6ef76129027674fc757381c2aa8f6013f5468ff73ad961138644d69e2d363457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 20:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e94f958896a7cb1b308bc89337cc150ec9f537e91f1803547751dd6d6664227-merged.mount: Deactivated successfully.
Nov 25 20:36:24 compute-0 podman[260147]: 2025-11-25 20:36:24.610390064 +0000 UTC m=+1.079528064 container remove 6ef76129027674fc757381c2aa8f6013f5468ff73ad961138644d69e2d363457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gould, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:36:24 compute-0 systemd[1]: libpod-conmon-6ef76129027674fc757381c2aa8f6013f5468ff73ad961138644d69e2d363457.scope: Deactivated successfully.
Nov 25 20:36:24 compute-0 sudo[260041]: pam_unix(sudo:session): session closed for user root
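# [annotation] The JSON emitted by sleepy_gould above is the "lvm list"
# inventory: OSDs 0, 1 and 2, each on a tagged LV (ceph_vg0-2/ceph_lv0-2)
# backed by a loop device (/dev/loop3-5), all belonging to cluster fsid
# 712dd110-763a-5547-8ef7-acda1414fdce. A quick way to summarize it, assuming
# cephadm is on PATH and only the JSON reaches stdout:
#
#   cephadm ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json \
#     | jq -r 'to_entries[] | "\(.key) \(.value[0].lv_path) \(.value[0].tags["ceph.osd_fsid"])"'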
Nov 25 20:36:24 compute-0 sudo[260185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:24 compute-0 sudo[260185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:24 compute-0 sudo[260185]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:24 compute-0 sudo[260210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:36:24 compute-0 sudo[260210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:24 compute-0 sudo[260210]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:24 compute-0 sudo[260235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:24 compute-0 sudo[260235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:24 compute-0 sudo[260235]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:24 compute-0 sudo[260260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:36:24 compute-0 sudo[260260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
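# [annotation] After the LVM inventory, cephadm cross-checks with "raw list",
# which reads BlueStore on-disk labels directly from block devices instead of
# relying on LVM tags; the container for it is created just below. A standalone
# equivalent, assuming the cephadm wrapper is on PATH:
#
#   cephadm shell -- ceph-volume raw list --format json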
Nov 25 20:36:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:25 compute-0 podman[260325]: 2025-11-25 20:36:25.422047988 +0000 UTC m=+0.070937862 container create 396a06b263ee7b13f3a30df8665e582f84bb4eb64e127cae10d14b8d9bcc3b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 20:36:25 compute-0 systemd[1]: Started libpod-conmon-396a06b263ee7b13f3a30df8665e582f84bb4eb64e127cae10d14b8d9bcc3b8b.scope.
Nov 25 20:36:25 compute-0 podman[260325]: 2025-11-25 20:36:25.392456897 +0000 UTC m=+0.041346841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:36:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:36:25 compute-0 podman[260325]: 2025-11-25 20:36:25.529822673 +0000 UTC m=+0.178712527 container init 396a06b263ee7b13f3a30df8665e582f84bb4eb64e127cae10d14b8d9bcc3b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mclean, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:36:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v944: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:25 compute-0 podman[260325]: 2025-11-25 20:36:25.539522046 +0000 UTC m=+0.188411890 container start 396a06b263ee7b13f3a30df8665e582f84bb4eb64e127cae10d14b8d9bcc3b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mclean, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:36:25 compute-0 podman[260325]: 2025-11-25 20:36:25.543682522 +0000 UTC m=+0.192572356 container attach 396a06b263ee7b13f3a30df8665e582f84bb4eb64e127cae10d14b8d9bcc3b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mclean, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:36:25 compute-0 naughty_mclean[260341]: 167 167
Nov 25 20:36:25 compute-0 systemd[1]: libpod-396a06b263ee7b13f3a30df8665e582f84bb4eb64e127cae10d14b8d9bcc3b8b.scope: Deactivated successfully.
Nov 25 20:36:25 compute-0 podman[260325]: 2025-11-25 20:36:25.545483003 +0000 UTC m=+0.194372847 container died 396a06b263ee7b13f3a30df8665e582f84bb4eb64e127cae10d14b8d9bcc3b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 20:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-df5c1df91db78963ca9a9dbfb7fbf944fb62101a5cf4a72a4e02aa8e5eaae431-merged.mount: Deactivated successfully.
Nov 25 20:36:25 compute-0 podman[260325]: 2025-11-25 20:36:25.5799316 +0000 UTC m=+0.228821434 container remove 396a06b263ee7b13f3a30df8665e582f84bb4eb64e127cae10d14b8d9bcc3b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mclean, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:36:25 compute-0 systemd[1]: libpod-conmon-396a06b263ee7b13f3a30df8665e582f84bb4eb64e127cae10d14b8d9bcc3b8b.scope: Deactivated successfully.
Nov 25 20:36:25 compute-0 podman[260364]: 2025-11-25 20:36:25.75271332 +0000 UTC m=+0.045806676 container create 9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:36:25 compute-0 systemd[1]: Started libpod-conmon-9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432.scope.
Nov 25 20:36:25 compute-0 podman[260364]: 2025-11-25 20:36:25.73203171 +0000 UTC m=+0.025125146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:36:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023867864f17580bbed1e8d01845f3986d2896e3254a77dcd9a5782205349bf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023867864f17580bbed1e8d01845f3986d2896e3254a77dcd9a5782205349bf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023867864f17580bbed1e8d01845f3986d2896e3254a77dcd9a5782205349bf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023867864f17580bbed1e8d01845f3986d2896e3254a77dcd9a5782205349bf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:36:25 compute-0 podman[260364]: 2025-11-25 20:36:25.866065543 +0000 UTC m=+0.159158969 container init 9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:36:25 compute-0 podman[260364]: 2025-11-25 20:36:25.880711784 +0000 UTC m=+0.173805160 container start 9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pike, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:36:25 compute-0 podman[260364]: 2025-11-25 20:36:25.885545689 +0000 UTC m=+0.178639095 container attach 9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pike, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:36:26 compute-0 ceph-mon[75144]: pgmap v944: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:36:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:36:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:36:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:36:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:36:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:36:27 compute-0 eloquent_pike[260381]: {
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "osd_id": 2,
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "type": "bluestore"
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:     },
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "osd_id": 1,
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "type": "bluestore"
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:     },
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "osd_id": 0,
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:         "type": "bluestore"
Nov 25 20:36:27 compute-0 eloquent_pike[260381]:     }
Nov 25 20:36:27 compute-0 eloquent_pike[260381]: }
Nov 25 20:36:27 compute-0 systemd[1]: libpod-9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432.scope: Deactivated successfully.
Nov 25 20:36:27 compute-0 podman[260364]: 2025-11-25 20:36:27.089436314 +0000 UTC m=+1.382529670 container died 9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pike, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:36:27 compute-0 systemd[1]: libpod-9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432.scope: Consumed 1.217s CPU time.
Nov 25 20:36:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-023867864f17580bbed1e8d01845f3986d2896e3254a77dcd9a5782205349bf7-merged.mount: Deactivated successfully.
Nov 25 20:36:27 compute-0 podman[260364]: 2025-11-25 20:36:27.168423701 +0000 UTC m=+1.461517047 container remove 9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pike, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:36:27 compute-0 systemd[1]: libpod-conmon-9cd7ba33b69ea97e826b25f9e34dee76b435d376557ca8a853ec98fb582ed432.scope: Deactivated successfully.
Nov 25 20:36:27 compute-0 sudo[260260]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:36:27 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:36:27 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:27 compute-0 podman[260415]: 2025-11-25 20:36:27.282778481 +0000 UTC m=+0.147216583 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 20:36:27 compute-0 sudo[260449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:36:27 compute-0 sudo[260449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:27 compute-0 sudo[260449]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:27 compute-0 sudo[260480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:36:27 compute-0 sudo[260480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:36:27 compute-0 sudo[260480]: pam_unix(sudo:session): session closed for user root
Nov 25 20:36:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v945: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:36:28 compute-0 ceph-mon[75144]: pgmap v945: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v946: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:30 compute-0 ceph-mon[75144]: pgmap v946: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v947: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:32 compute-0 ceph-mon[75144]: pgmap v947: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v948: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:34 compute-0 ceph-mon[75144]: pgmap v948: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v949: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:36 compute-0 nova_compute[248866]: 2025-11-25 20:36:36.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:36:36 compute-0 ceph-mon[75144]: pgmap v949: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v950: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:38 compute-0 nova_compute[248866]: 2025-11-25 20:36:38.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:36:38 compute-0 ceph-mon[75144]: pgmap v950: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:39 compute-0 nova_compute[248866]: 2025-11-25 20:36:39.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:36:39 compute-0 nova_compute[248866]: 2025-11-25 20:36:39.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:36:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v951: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:40 compute-0 ceph-mon[75144]: pgmap v951: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:41 compute-0 nova_compute[248866]: 2025-11-25 20:36:41.044 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:36:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v952: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:42 compute-0 ceph-mon[75144]: pgmap v952: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.071 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.072 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.072 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.073 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.073 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:36:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v953: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:36:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/908872229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.563 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:36:43 compute-0 ceph-mon[75144]: pgmap v953: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:43 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/908872229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.817 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.819 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5281MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.820 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:36:43 compute-0 nova_compute[248866]: 2025-11-25 20:36:43.820 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.118 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.118 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.238 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing inventories for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.343 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating ProviderTree inventory for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.344 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating inventory in ProviderTree for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.361 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing aggregate associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.417 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing trait associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, traits: HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.433 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:36:44 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:36:44 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2941690042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.890 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.897 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.915 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.916 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:36:44 compute-0 nova_compute[248866]: 2025-11-25 20:36:44.917 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:36:44 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2941690042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:36:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v954: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:45 compute-0 nova_compute[248866]: 2025-11-25 20:36:45.917 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:36:45 compute-0 nova_compute[248866]: 2025-11-25 20:36:45.918 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:36:45 compute-0 nova_compute[248866]: 2025-11-25 20:36:45.918 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:36:45 compute-0 ceph-mon[75144]: pgmap v954: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:45 compute-0 nova_compute[248866]: 2025-11-25 20:36:45.938 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:36:45 compute-0 nova_compute[248866]: 2025-11-25 20:36:45.938 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:36:46 compute-0 nova_compute[248866]: 2025-11-25 20:36:46.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:36:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v955: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:48 compute-0 podman[260549]: 2025-11-25 20:36:48.010520487 +0000 UTC m=+0.093896197 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 20:36:48 compute-0 ceph-mon[75144]: pgmap v955: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:36:48.952 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:36:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:36:48.953 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:36:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:36:48.953 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:36:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v956: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:50 compute-0 ceph-mon[75144]: pgmap v956: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:51 compute-0 podman[260568]: 2025-11-25 20:36:51.007717083 +0000 UTC m=+0.091780831 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:36:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v957: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:52 compute-0 ceph-mon[75144]: pgmap v957: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v958: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:54 compute-0 ceph-mon[75144]: pgmap v958: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:36:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v959: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:36:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:36:56 compute-0 ceph-mon[75144]: pgmap v959: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:36:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:36:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:36:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:36:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:36:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:36:57
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes']
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:36:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v960: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:57 compute-0 ceph-mon[75144]: pgmap v960: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:36:58 compute-0 podman[260588]: 2025-11-25 20:36:58.011650767 +0000 UTC m=+0.113898025 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:36:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v961: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:00 compute-0 ceph-mon[75144]: pgmap v961: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:37:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:37:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v962: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:37:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:37:02 compute-0 ceph-mon[75144]: pgmap v962: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v963: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:04 compute-0 ceph-mgr[75443]: [devicehealth INFO root] Check health
Nov 25 20:37:04 compute-0 ceph-mon[75144]: pgmap v963: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v964: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:37:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:37:06 compute-0 ceph-mon[75144]: pgmap v964: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v965: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:07 compute-0 ceph-mon[75144]: pgmap v965: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v966: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:10 compute-0 ceph-mon[75144]: pgmap v966: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.632731) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103030632849, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1760, "num_deletes": 251, "total_data_size": 1911311, "memory_usage": 1942848, "flush_reason": "Manual Compaction"}
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103030647570, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1857905, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18333, "largest_seqno": 20092, "table_properties": {"data_size": 1849907, "index_size": 4817, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16288, "raw_average_key_size": 19, "raw_value_size": 1833872, "raw_average_value_size": 2236, "num_data_blocks": 221, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764102840, "oldest_key_time": 1764102840, "file_creation_time": 1764103030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14885 microseconds, and 9439 cpu microseconds.
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.647625) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1857905 bytes OK
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.647649) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.649546) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.649570) EVENT_LOG_v1 {"time_micros": 1764103030649563, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.649591) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1903804, prev total WAL file size 1903804, number of live WAL files 2.
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.650687) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1814KB)], [47(4702KB)]
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103030650760, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 6673180, "oldest_snapshot_seqno": -1}
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 3737 keys, 5505144 bytes, temperature: kUnknown
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103030698384, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 5505144, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5478660, "index_size": 16071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89713, "raw_average_key_size": 24, "raw_value_size": 5410017, "raw_average_value_size": 1447, "num_data_blocks": 690, "num_entries": 3737, "num_filter_entries": 3737, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764103030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.700091) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 5505144 bytes
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.701911) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.8 rd, 115.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.6 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 4251, records dropped: 514 output_compression: NoCompression
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.701952) EVENT_LOG_v1 {"time_micros": 1764103030701935, "job": 24, "event": "compaction_finished", "compaction_time_micros": 47726, "compaction_time_cpu_micros": 25240, "output_level": 6, "num_output_files": 1, "total_output_size": 5505144, "num_input_records": 4251, "num_output_records": 3737, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103030703066, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103030704965, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.650556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.705179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.705189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.705192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.705195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:37:10 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:37:10.705200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:37:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v967: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:12 compute-0 ceph-mon[75144]: pgmap v967: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v968: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:14 compute-0 ceph-mon[75144]: pgmap v968: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v969: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:15 compute-0 ceph-mon[75144]: pgmap v969: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:37:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3125684792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:37:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:37:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3125684792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:37:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3125684792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:37:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3125684792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:37:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v970: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:18 compute-0 ceph-mon[75144]: pgmap v970: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:18 compute-0 podman[260616]: 2025-11-25 20:37:18.967555575 +0000 UTC m=+0.073204190 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:37:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v971: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:20 compute-0 ceph-mon[75144]: pgmap v971: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v972: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:21 compute-0 podman[260635]: 2025-11-25 20:37:21.995265993 +0000 UTC m=+0.085634535 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:22 compute-0 ceph-mon[75144]: pgmap v972: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v973: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:24 compute-0 ceph-mon[75144]: pgmap v973: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v974: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:26 compute-0 ceph-mon[75144]: pgmap v974: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:37:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:37:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:37:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:37:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:37:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:37:27 compute-0 sudo[260653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:27 compute-0 sudo[260653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:27 compute-0 sudo[260653]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v975: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:27 compute-0 sudo[260678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:37:27 compute-0 sudo[260678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:27 compute-0 sudo[260678]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:27 compute-0 sudo[260703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:27 compute-0 sudo[260703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:27 compute-0 sudo[260703]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:27 compute-0 sudo[260728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 20:37:27 compute-0 sudo[260728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:28 compute-0 sudo[260728]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:37:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:37:28 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:28 compute-0 sudo[260789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:28 compute-0 sudo[260789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:28 compute-0 sudo[260789]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:28 compute-0 podman[260771]: 2025-11-25 20:37:28.254629154 +0000 UTC m=+0.185194274 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:28 compute-0 sudo[260821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:37:28 compute-0 sudo[260821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:28 compute-0 sudo[260821]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:28 compute-0 sudo[260849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:28 compute-0 sudo[260849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:28 compute-0 sudo[260849]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:28 compute-0 sudo[260874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:37:28 compute-0 sudo[260874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:28 compute-0 ceph-mon[75144]: pgmap v975: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:28 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:29 compute-0 sudo[260874]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:29 compute-0 sudo[260930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:29 compute-0 sudo[260930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:29 compute-0 sudo[260930]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:29 compute-0 sudo[260955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:37:29 compute-0 sudo[260955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:29 compute-0 sudo[260955]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:29 compute-0 sudo[260980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:29 compute-0 sudo[260980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:29 compute-0 sudo[260980]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v976: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:29 compute-0 sudo[261005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- inventory --format=json-pretty --filter-for-batch
Nov 25 20:37:29 compute-0 sudo[261005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:29 compute-0 ceph-mon[75144]: pgmap v976: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:30 compute-0 podman[261070]: 2025-11-25 20:37:30.041233361 +0000 UTC m=+0.066648795 container create 3d973ccc712f05ef766652946eeba0ea37a181a12ded51df84442e1c688f860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sammet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:37:30 compute-0 systemd[1]: Started libpod-conmon-3d973ccc712f05ef766652946eeba0ea37a181a12ded51df84442e1c688f860a.scope.
Nov 25 20:37:30 compute-0 podman[261070]: 2025-11-25 20:37:30.012522858 +0000 UTC m=+0.037938332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:37:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:37:30 compute-0 podman[261070]: 2025-11-25 20:37:30.148778773 +0000 UTC m=+0.174194247 container init 3d973ccc712f05ef766652946eeba0ea37a181a12ded51df84442e1c688f860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:37:30 compute-0 podman[261070]: 2025-11-25 20:37:30.161182178 +0000 UTC m=+0.186597612 container start 3d973ccc712f05ef766652946eeba0ea37a181a12ded51df84442e1c688f860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sammet, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:37:30 compute-0 podman[261070]: 2025-11-25 20:37:30.165023191 +0000 UTC m=+0.190438615 container attach 3d973ccc712f05ef766652946eeba0ea37a181a12ded51df84442e1c688f860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sammet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:37:30 compute-0 goofy_sammet[261086]: 167 167
Nov 25 20:37:30 compute-0 systemd[1]: libpod-3d973ccc712f05ef766652946eeba0ea37a181a12ded51df84442e1c688f860a.scope: Deactivated successfully.
Nov 25 20:37:30 compute-0 podman[261091]: 2025-11-25 20:37:30.254697473 +0000 UTC m=+0.055687619 container died 3d973ccc712f05ef766652946eeba0ea37a181a12ded51df84442e1c688f860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:37:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5931c655b24661cb134387086b44b7564181c534a774d18ef5aa2fff0954f312-merged.mount: Deactivated successfully.
Nov 25 20:37:30 compute-0 podman[261091]: 2025-11-25 20:37:30.306698523 +0000 UTC m=+0.107688659 container remove 3d973ccc712f05ef766652946eeba0ea37a181a12ded51df84442e1c688f860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sammet, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:37:30 compute-0 systemd[1]: libpod-conmon-3d973ccc712f05ef766652946eeba0ea37a181a12ded51df84442e1c688f860a.scope: Deactivated successfully.
Nov 25 20:37:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:30 compute-0 podman[261113]: 2025-11-25 20:37:30.578694701 +0000 UTC m=+0.072611085 container create dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 20:37:30 compute-0 systemd[1]: Started libpod-conmon-dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11.scope.
Nov 25 20:37:30 compute-0 podman[261113]: 2025-11-25 20:37:30.550163353 +0000 UTC m=+0.044079767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:37:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81118fb1c855209e12fd85e86da0a0c2e59e34e9a8d721ad4a587755b68d1d93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81118fb1c855209e12fd85e86da0a0c2e59e34e9a8d721ad4a587755b68d1d93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81118fb1c855209e12fd85e86da0a0c2e59e34e9a8d721ad4a587755b68d1d93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81118fb1c855209e12fd85e86da0a0c2e59e34e9a8d721ad4a587755b68d1d93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:30 compute-0 podman[261113]: 2025-11-25 20:37:30.700475576 +0000 UTC m=+0.194391950 container init dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:37:30 compute-0 podman[261113]: 2025-11-25 20:37:30.714499354 +0000 UTC m=+0.208415718 container start dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:37:30 compute-0 podman[261113]: 2025-11-25 20:37:30.719066317 +0000 UTC m=+0.212982661 container attach dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:37:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v977: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:31 compute-0 ceph-mon[75144]: pgmap v977: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]: [
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:     {
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         "available": false,
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         "ceph_device": false,
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         "lsm_data": {},
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         "lvs": [],
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         "path": "/dev/sr0",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         "rejected_reasons": [
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "Insufficient space (<5GB)",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "Has a FileSystem"
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         ],
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         "sys_api": {
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "actuators": null,
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "device_nodes": "sr0",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "devname": "sr0",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "human_readable_size": "482.00 KB",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "id_bus": "ata",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "model": "QEMU DVD-ROM",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "nr_requests": "2",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "parent": "/dev/sr0",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "partitions": {},
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "path": "/dev/sr0",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "removable": "1",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "rev": "2.5+",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "ro": "0",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "rotational": "1",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "sas_address": "",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "sas_device_handle": "",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "scheduler_mode": "mq-deadline",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "sectors": 0,
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "sectorsize": "2048",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "size": 493568.0,
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "support_discard": "2048",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "type": "disk",
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:             "vendor": "QEMU"
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:         }
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]:     }
Nov 25 20:37:32 compute-0 fervent_dhawan[261130]: ]
Nov 25 20:37:32 compute-0 systemd[1]: libpod-dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11.scope: Deactivated successfully.
Nov 25 20:37:32 compute-0 systemd[1]: libpod-dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11.scope: Consumed 1.658s CPU time.
Nov 25 20:37:32 compute-0 podman[261113]: 2025-11-25 20:37:32.350605681 +0000 UTC m=+1.844522035 container died dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 25 20:37:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-81118fb1c855209e12fd85e86da0a0c2e59e34e9a8d721ad4a587755b68d1d93-merged.mount: Deactivated successfully.
Nov 25 20:37:32 compute-0 podman[261113]: 2025-11-25 20:37:32.412502287 +0000 UTC m=+1.906418621 container remove dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:37:32 compute-0 systemd[1]: libpod-conmon-dd4279f7bec40b14b575aaeba3f948625d8fd1e6ae7de070be7487de6b018f11.scope: Deactivated successfully.
Nov 25 20:37:32 compute-0 sudo[261005]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:37:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:37:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:37:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:37:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:37:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:37:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:37:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:32 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 34e18f68-89a1-45c0-a556-648bd79d69b0 does not exist
Nov 25 20:37:32 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev bbc08613-7272-49dc-b5c7-ededd98e879d does not exist
Nov 25 20:37:32 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 8676b573-2698-43d1-8dd4-4ab005ff8d93 does not exist
Nov 25 20:37:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:37:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:37:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:37:32 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:37:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:37:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:37:32 compute-0 sudo[262933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:32 compute-0 sudo[262933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:32 compute-0 sudo[262933]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:32 compute-0 sudo[262958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:37:32 compute-0 sudo[262958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:32 compute-0 sudo[262958]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:32 compute-0 sudo[262983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:32 compute-0 sudo[262983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:32 compute-0 sudo[262983]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:32 compute-0 sudo[263008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:37:32 compute-0 sudo[263008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:33 compute-0 podman[263074]: 2025-11-25 20:37:33.219061346 +0000 UTC m=+0.060203550 container create 8cc1f770ecee5e03eac12da49aab49cc0046c3d124a9e9c4f69554367e47e775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:33 compute-0 systemd[1]: Started libpod-conmon-8cc1f770ecee5e03eac12da49aab49cc0046c3d124a9e9c4f69554367e47e775.scope.
Nov 25 20:37:33 compute-0 podman[263074]: 2025-11-25 20:37:33.196426377 +0000 UTC m=+0.037568601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:37:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:37:33 compute-0 podman[263074]: 2025-11-25 20:37:33.312097479 +0000 UTC m=+0.153239733 container init 8cc1f770ecee5e03eac12da49aab49cc0046c3d124a9e9c4f69554367e47e775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:37:33 compute-0 podman[263074]: 2025-11-25 20:37:33.323900667 +0000 UTC m=+0.165042881 container start 8cc1f770ecee5e03eac12da49aab49cc0046c3d124a9e9c4f69554367e47e775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:37:33 compute-0 podman[263074]: 2025-11-25 20:37:33.329476187 +0000 UTC m=+0.170618471 container attach 8cc1f770ecee5e03eac12da49aab49cc0046c3d124a9e9c4f69554367e47e775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carson, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:37:33 compute-0 vigorous_carson[263090]: 167 167
Nov 25 20:37:33 compute-0 systemd[1]: libpod-8cc1f770ecee5e03eac12da49aab49cc0046c3d124a9e9c4f69554367e47e775.scope: Deactivated successfully.
Nov 25 20:37:33 compute-0 podman[263074]: 2025-11-25 20:37:33.331439029 +0000 UTC m=+0.172581243 container died 8cc1f770ecee5e03eac12da49aab49cc0046c3d124a9e9c4f69554367e47e775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebc6fbc8fc68cbc142aa1d3d5d70ceb152e16565e49217d5d8f0af4c482fed87-merged.mount: Deactivated successfully.
Nov 25 20:37:33 compute-0 podman[263074]: 2025-11-25 20:37:33.383776598 +0000 UTC m=+0.224918782 container remove 8cc1f770ecee5e03eac12da49aab49cc0046c3d124a9e9c4f69554367e47e775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 20:37:33 compute-0 systemd[1]: libpod-conmon-8cc1f770ecee5e03eac12da49aab49cc0046c3d124a9e9c4f69554367e47e775.scope: Deactivated successfully.
Nov 25 20:37:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:37:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:37:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:37:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:37:33 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:37:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v978: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:33 compute-0 podman[263114]: 2025-11-25 20:37:33.640104615 +0000 UTC m=+0.064185048 container create 278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:33 compute-0 systemd[1]: Started libpod-conmon-278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92.scope.
Nov 25 20:37:33 compute-0 podman[263114]: 2025-11-25 20:37:33.618711909 +0000 UTC m=+0.042792422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:37:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4db4015816e3a04b13c789d8c50dd5b5ec2335f39b2676a15b0a0fd5d1288a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4db4015816e3a04b13c789d8c50dd5b5ec2335f39b2676a15b0a0fd5d1288a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4db4015816e3a04b13c789d8c50dd5b5ec2335f39b2676a15b0a0fd5d1288a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4db4015816e3a04b13c789d8c50dd5b5ec2335f39b2676a15b0a0fd5d1288a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4db4015816e3a04b13c789d8c50dd5b5ec2335f39b2676a15b0a0fd5d1288a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:33 compute-0 podman[263114]: 2025-11-25 20:37:33.75887613 +0000 UTC m=+0.182956603 container init 278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:37:33 compute-0 podman[263114]: 2025-11-25 20:37:33.769231618 +0000 UTC m=+0.193312051 container start 278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:37:33 compute-0 podman[263114]: 2025-11-25 20:37:33.773486492 +0000 UTC m=+0.197566925 container attach 278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:37:34 compute-0 ceph-mon[75144]: pgmap v978: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:34 compute-0 friendly_lovelace[263130]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:37:34 compute-0 friendly_lovelace[263130]: --> relative data size: 1.0
Nov 25 20:37:34 compute-0 friendly_lovelace[263130]: --> All data devices are unavailable
Nov 25 20:37:34 compute-0 systemd[1]: libpod-278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92.scope: Deactivated successfully.
Nov 25 20:37:34 compute-0 podman[263114]: 2025-11-25 20:37:34.961951627 +0000 UTC m=+1.386032120 container died 278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:37:34 compute-0 systemd[1]: libpod-278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92.scope: Consumed 1.152s CPU time.
Nov 25 20:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a4db4015816e3a04b13c789d8c50dd5b5ec2335f39b2676a15b0a0fd5d1288a-merged.mount: Deactivated successfully.
Nov 25 20:37:35 compute-0 podman[263114]: 2025-11-25 20:37:35.033232995 +0000 UTC m=+1.457313448 container remove 278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:35 compute-0 systemd[1]: libpod-conmon-278da79c03c42378f289cb43a07287ac894b4b227ed2184d75a1e20297bb5b92.scope: Deactivated successfully.
Nov 25 20:37:35 compute-0 sudo[263008]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:35 compute-0 sudo[263173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:35 compute-0 sudo[263173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:35 compute-0 sudo[263173]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:35 compute-0 sudo[263198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:37:35 compute-0 sudo[263198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:35 compute-0 sudo[263198]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:35 compute-0 sudo[263223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:35 compute-0 sudo[263223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:35 compute-0 sudo[263223]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:35 compute-0 sudo[263248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:37:35 compute-0 sudo[263248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v979: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:35 compute-0 podman[263313]: 2025-11-25 20:37:35.915105871 +0000 UTC m=+0.061954118 container create 683f47e6ceac8064c877ee2f01047c6d57eef2b1c7b5cd18465198bd956126f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:35 compute-0 systemd[1]: Started libpod-conmon-683f47e6ceac8064c877ee2f01047c6d57eef2b1c7b5cd18465198bd956126f0.scope.
Nov 25 20:37:35 compute-0 podman[263313]: 2025-11-25 20:37:35.889874312 +0000 UTC m=+0.036722609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:37:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:37:36 compute-0 podman[263313]: 2025-11-25 20:37:36.024677189 +0000 UTC m=+0.171525446 container init 683f47e6ceac8064c877ee2f01047c6d57eef2b1c7b5cd18465198bd956126f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_panini, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:37:36 compute-0 podman[263313]: 2025-11-25 20:37:36.036892957 +0000 UTC m=+0.183741214 container start 683f47e6ceac8064c877ee2f01047c6d57eef2b1c7b5cd18465198bd956126f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_panini, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:36 compute-0 podman[263313]: 2025-11-25 20:37:36.040850634 +0000 UTC m=+0.187698941 container attach 683f47e6ceac8064c877ee2f01047c6d57eef2b1c7b5cd18465198bd956126f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_panini, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:37:36 compute-0 youthful_panini[263330]: 167 167
Nov 25 20:37:36 compute-0 systemd[1]: libpod-683f47e6ceac8064c877ee2f01047c6d57eef2b1c7b5cd18465198bd956126f0.scope: Deactivated successfully.
Nov 25 20:37:36 compute-0 podman[263335]: 2025-11-25 20:37:36.103116859 +0000 UTC m=+0.041126728 container died 683f47e6ceac8064c877ee2f01047c6d57eef2b1c7b5cd18465198bd956126f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_panini, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:37:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-571ae458a43f412788633b39fcbaf9993b0c3621f13142ee29592e727aefe3ac-merged.mount: Deactivated successfully.
Nov 25 20:37:36 compute-0 podman[263335]: 2025-11-25 20:37:36.148069919 +0000 UTC m=+0.086079748 container remove 683f47e6ceac8064c877ee2f01047c6d57eef2b1c7b5cd18465198bd956126f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_panini, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 20:37:36 compute-0 systemd[1]: libpod-conmon-683f47e6ceac8064c877ee2f01047c6d57eef2b1c7b5cd18465198bd956126f0.scope: Deactivated successfully.
Nov 25 20:37:36 compute-0 podman[263356]: 2025-11-25 20:37:36.417826196 +0000 UTC m=+0.075048880 container create 3eb8a08b496bd645df9489e487c8627e8e5a6afbd125ad2aa81122fc17e8f771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:37:36 compute-0 systemd[1]: Started libpod-conmon-3eb8a08b496bd645df9489e487c8627e8e5a6afbd125ad2aa81122fc17e8f771.scope.
Nov 25 20:37:36 compute-0 podman[263356]: 2025-11-25 20:37:36.389316899 +0000 UTC m=+0.046539633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:37:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21422d5a3416e0be7c18ae069fbf0444e0a990f2a3eb4b9acba8e65ee7ed8eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21422d5a3416e0be7c18ae069fbf0444e0a990f2a3eb4b9acba8e65ee7ed8eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21422d5a3416e0be7c18ae069fbf0444e0a990f2a3eb4b9acba8e65ee7ed8eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21422d5a3416e0be7c18ae069fbf0444e0a990f2a3eb4b9acba8e65ee7ed8eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:36 compute-0 podman[263356]: 2025-11-25 20:37:36.523693934 +0000 UTC m=+0.180916618 container init 3eb8a08b496bd645df9489e487c8627e8e5a6afbd125ad2aa81122fc17e8f771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:36 compute-0 podman[263356]: 2025-11-25 20:37:36.543701152 +0000 UTC m=+0.200923836 container start 3eb8a08b496bd645df9489e487c8627e8e5a6afbd125ad2aa81122fc17e8f771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 20:37:36 compute-0 podman[263356]: 2025-11-25 20:37:36.550117435 +0000 UTC m=+0.207340119 container attach 3eb8a08b496bd645df9489e487c8627e8e5a6afbd125ad2aa81122fc17e8f771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:37:36 compute-0 ceph-mon[75144]: pgmap v979: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:37 compute-0 nova_compute[248866]: 2025-11-25 20:37:37.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:37 compute-0 infallible_darwin[263372]: {
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:     "0": [
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:         {
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "devices": [
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "/dev/loop3"
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             ],
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_name": "ceph_lv0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_size": "21470642176",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "name": "ceph_lv0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "tags": {
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.cluster_name": "ceph",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.crush_device_class": "",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.encrypted": "0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.osd_id": "0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.type": "block",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.vdo": "0"
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             },
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "type": "block",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "vg_name": "ceph_vg0"
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:         }
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:     ],
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:     "1": [
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:         {
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "devices": [
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "/dev/loop4"
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             ],
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_name": "ceph_lv1",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_size": "21470642176",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "name": "ceph_lv1",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "tags": {
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.cluster_name": "ceph",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.crush_device_class": "",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.encrypted": "0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.osd_id": "1",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.type": "block",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.vdo": "0"
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             },
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "type": "block",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "vg_name": "ceph_vg1"
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:         }
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:     ],
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:     "2": [
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:         {
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "devices": [
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "/dev/loop5"
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             ],
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_name": "ceph_lv2",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_size": "21470642176",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "name": "ceph_lv2",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "tags": {
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.cluster_name": "ceph",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.crush_device_class": "",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.encrypted": "0",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.osd_id": "2",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.type": "block",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:                 "ceph.vdo": "0"
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             },
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "type": "block",
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:             "vg_name": "ceph_vg2"
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:         }
Nov 25 20:37:37 compute-0 infallible_darwin[263372]:     ]
Nov 25 20:37:37 compute-0 infallible_darwin[263372]: }
Nov 25 20:37:37 compute-0 systemd[1]: libpod-3eb8a08b496bd645df9489e487c8627e8e5a6afbd125ad2aa81122fc17e8f771.scope: Deactivated successfully.
Nov 25 20:37:37 compute-0 podman[263356]: 2025-11-25 20:37:37.31673001 +0000 UTC m=+0.973952684 container died 3eb8a08b496bd645df9489e487c8627e8e5a6afbd125ad2aa81122fc17e8f771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:37:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b21422d5a3416e0be7c18ae069fbf0444e0a990f2a3eb4b9acba8e65ee7ed8eb-merged.mount: Deactivated successfully.
Nov 25 20:37:37 compute-0 podman[263356]: 2025-11-25 20:37:37.413607047 +0000 UTC m=+1.070829721 container remove 3eb8a08b496bd645df9489e487c8627e8e5a6afbd125ad2aa81122fc17e8f771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:37 compute-0 systemd[1]: libpod-conmon-3eb8a08b496bd645df9489e487c8627e8e5a6afbd125ad2aa81122fc17e8f771.scope: Deactivated successfully.
Nov 25 20:37:37 compute-0 sudo[263248]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:37 compute-0 sudo[263394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:37 compute-0 sudo[263394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v980: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:37 compute-0 sudo[263394]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:37 compute-0 sudo[263419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:37:37 compute-0 sudo[263419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:37 compute-0 sudo[263419]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:37 compute-0 sudo[263444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:37 compute-0 sudo[263444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:37 compute-0 sudo[263444]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:37 compute-0 sudo[263469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:37:37 compute-0 sudo[263469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:38 compute-0 podman[263534]: 2025-11-25 20:37:38.34404751 +0000 UTC m=+0.071701171 container create c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:37:38 compute-0 systemd[1]: Started libpod-conmon-c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd.scope.
Nov 25 20:37:38 compute-0 podman[263534]: 2025-11-25 20:37:38.31356721 +0000 UTC m=+0.041220911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:37:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:37:38 compute-0 podman[263534]: 2025-11-25 20:37:38.449425484 +0000 UTC m=+0.177079185 container init c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:37:38 compute-0 podman[263534]: 2025-11-25 20:37:38.460657236 +0000 UTC m=+0.188310907 container start c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:37:38 compute-0 podman[263534]: 2025-11-25 20:37:38.464388287 +0000 UTC m=+0.192042058 container attach c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:37:38 compute-0 systemd[1]: libpod-c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd.scope: Deactivated successfully.
Nov 25 20:37:38 compute-0 fervent_rubin[263550]: 167 167
Nov 25 20:37:38 compute-0 conmon[263550]: conmon c4b4035606578ecf4896 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd.scope/container/memory.events
Nov 25 20:37:38 compute-0 podman[263555]: 2025-11-25 20:37:38.528370249 +0000 UTC m=+0.039999118 container died c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8da8bc2c6bb7817a0af521b4cd1bf8670e5820bfcf74c950c06863538a83aeb-merged.mount: Deactivated successfully.
Nov 25 20:37:38 compute-0 podman[263555]: 2025-11-25 20:37:38.576216026 +0000 UTC m=+0.087844855 container remove c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:37:38 compute-0 systemd[1]: libpod-conmon-c4b4035606578ecf48960da86b3a467fe44a9923d5317eed3abe656c1d3705fd.scope: Deactivated successfully.
Nov 25 20:37:38 compute-0 ceph-mon[75144]: pgmap v980: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:38 compute-0 podman[263577]: 2025-11-25 20:37:38.85178482 +0000 UTC m=+0.080937589 container create 38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:37:38 compute-0 podman[263577]: 2025-11-25 20:37:38.810475108 +0000 UTC m=+0.039627877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:37:38 compute-0 systemd[1]: Started libpod-conmon-38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e.scope.
Nov 25 20:37:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee4ab0e033cddbff685bc905e8ff145bf86e5af3f1aefeb0f7721e72d2a6e19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee4ab0e033cddbff685bc905e8ff145bf86e5af3f1aefeb0f7721e72d2a6e19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee4ab0e033cddbff685bc905e8ff145bf86e5af3f1aefeb0f7721e72d2a6e19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee4ab0e033cddbff685bc905e8ff145bf86e5af3f1aefeb0f7721e72d2a6e19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:37:39 compute-0 podman[263577]: 2025-11-25 20:37:39.019878392 +0000 UTC m=+0.249031231 container init 38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:37:39 compute-0 podman[263577]: 2025-11-25 20:37:39.032529412 +0000 UTC m=+0.261682191 container start 38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:37:39 compute-0 podman[263577]: 2025-11-25 20:37:39.036331955 +0000 UTC m=+0.265484814 container attach 38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:37:39 compute-0 nova_compute[248866]: 2025-11-25 20:37:39.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v981: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:40 compute-0 nova_compute[248866]: 2025-11-25 20:37:40.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:40 compute-0 nova_compute[248866]: 2025-11-25 20:37:40.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]: {
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "osd_id": 2,
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "type": "bluestore"
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:     },
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "osd_id": 1,
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "type": "bluestore"
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:     },
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "osd_id": 0,
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:         "type": "bluestore"
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]:     }
Nov 25 20:37:40 compute-0 eloquent_leakey[263594]: }
Nov 25 20:37:40 compute-0 systemd[1]: libpod-38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e.scope: Deactivated successfully.
Nov 25 20:37:40 compute-0 podman[263577]: 2025-11-25 20:37:40.110320749 +0000 UTC m=+1.339473488 container died 38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:37:40 compute-0 systemd[1]: libpod-38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e.scope: Consumed 1.087s CPU time.
Nov 25 20:37:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-aee4ab0e033cddbff685bc905e8ff145bf86e5af3f1aefeb0f7721e72d2a6e19-merged.mount: Deactivated successfully.
Nov 25 20:37:40 compute-0 podman[263577]: 2025-11-25 20:37:40.170857958 +0000 UTC m=+1.400010727 container remove 38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:37:40 compute-0 systemd[1]: libpod-conmon-38a1b1a44137ce01d189ac932f496c9c7c5fe8b777883cff436204e39d55ba5e.scope: Deactivated successfully.
Nov 25 20:37:40 compute-0 sudo[263469]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:37:40 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:37:40 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:40 compute-0 sudo[263637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:37:40 compute-0 sudo[263637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:40 compute-0 sudo[263637]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:40 compute-0 sudo[263662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:37:40 compute-0 sudo[263662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:37:40 compute-0 sudo[263662]: pam_unix(sudo:session): session closed for user root
Nov 25 20:37:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:40 compute-0 ceph-mon[75144]: pgmap v981: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:40 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:40 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:37:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v982: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:41 compute-0 ceph-mon[75144]: pgmap v982: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:42 compute-0 nova_compute[248866]: 2025-11-25 20:37:42.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:43 compute-0 nova_compute[248866]: 2025-11-25 20:37:43.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v983: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:44 compute-0 nova_compute[248866]: 2025-11-25 20:37:44.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:44 compute-0 ceph-mon[75144]: pgmap v983: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.084 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.085 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.085 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.086 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.086 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:37:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:37:45 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4190391637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:37:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v984: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.586 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:37:45 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4190391637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.843 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.845 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5321MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.845 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.846 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.933 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.934 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:37:45 compute-0 nova_compute[248866]: 2025-11-25 20:37:45.966 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:37:46 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:37:46 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977376645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:37:46 compute-0 nova_compute[248866]: 2025-11-25 20:37:46.455 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:37:46 compute-0 nova_compute[248866]: 2025-11-25 20:37:46.463 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:37:46 compute-0 nova_compute[248866]: 2025-11-25 20:37:46.485 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:37:46 compute-0 nova_compute[248866]: 2025-11-25 20:37:46.487 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:37:46 compute-0 nova_compute[248866]: 2025-11-25 20:37:46.488 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:37:46 compute-0 ceph-mon[75144]: pgmap v984: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:46 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1977376645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
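The resource-tracker pass above ends with Placement seeing unchanged inventory. A minimal sketch of the capacity arithmetic Placement applies to that inventory (an assumption based on the standard Placement formula, capacity = (total - reserved) * allocation_ratio; the totals, reservations, and ratios are copied from the 20:37:46 inventory line):

    # Schedulable capacity implied by the inventory nova-compute reported.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1

With no instances on the host (total allocated vcpus: 0, used_ram only the 512MB reservation), all of that capacity is free, which matches the idle periodic-task churn that follows.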
Nov 25 20:37:47 compute-0 nova_compute[248866]: 2025-11-25 20:37:47.488 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:47 compute-0 nova_compute[248866]: 2025-11-25 20:37:47.489 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:37:47 compute-0 nova_compute[248866]: 2025-11-25 20:37:47.489 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:37:47 compute-0 nova_compute[248866]: 2025-11-25 20:37:47.505 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:37:47 compute-0 nova_compute[248866]: 2025-11-25 20:37:47.505 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:47 compute-0 nova_compute[248866]: 2025-11-25 20:37:47.506 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:37:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v985: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:47 compute-0 ceph-mon[75144]: pgmap v985: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:37:48.953 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:37:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:37:48.954 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:37:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:37:48.954 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:37:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v986: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:50 compute-0 podman[263731]: 2025-11-25 20:37:50.001563534 +0000 UTC m=+0.089920523 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 20:37:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:50 compute-0 ceph-mon[75144]: pgmap v986: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v987: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:52 compute-0 ceph-mon[75144]: pgmap v987: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:52 compute-0 podman[263750]: 2025-11-25 20:37:52.997151624 +0000 UTC m=+0.089397319 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:37:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v988: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:53 compute-0 ceph-mon[75144]: pgmap v988: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:37:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v989: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:56 compute-0 ceph-mon[75144]: pgmap v989: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:37:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:37:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:37:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:37:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:37:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:37:57
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', 'images', 'volumes']
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:37:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v990: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:58 compute-0 ceph-mon[75144]: pgmap v990: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:37:59 compute-0 podman[263771]: 2025-11-25 20:37:59.035507712 +0000 UTC m=+0.125953548 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 20:37:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v991: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:00 compute-0 ceph-mon[75144]: pgmap v991: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v992: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:01 compute-0 ceph-mon[75144]: pgmap v992: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:38:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
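The pg_autoscaler numbers above are self-consistent: dividing the '.mgr' pg target by its capacity ratio gives exactly 300, consistent with this cluster's 3 OSDs times the default mon_target_pg_per_osd of 100 (an inference; neither value is stated on these lines, but 3 OSDs appear later in this log and 100 is the Ceph default):

    # Reproducing the '.mgr' pg target from the logged capacity ratio.
    capacity_ratio = 1.4371499967441557e-05   # '.mgr' share of raw space
    bias = 1.0
    pg_target = capacity_ratio * bias * (3 * 100)  # 3 OSDs x 100 PGs/OSD
    print(pg_target)  # 0.004311449990232467, exactly as logged; quantized up to 1

The empty pools compute a target of 0.0 and are simply left quantized at their current 32 PGs, so no pg_num changes are issued.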
Nov 25 20:38:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v993: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:04 compute-0 ceph-mon[75144]: pgmap v993: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v994: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:06 compute-0 ceph-mon[75144]: pgmap v994: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v995: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:08 compute-0 ceph-mon[75144]: pgmap v995: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v996: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:09 compute-0 ceph-mon[75144]: pgmap v996: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v997: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:11 compute-0 ceph-mon[75144]: pgmap v997: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v998: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:13 compute-0 ceph-mon[75144]: pgmap v998: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v999: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:16 compute-0 ceph-mon[75144]: pgmap v999: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:38:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1241153688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:38:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:38:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1241153688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:38:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1000: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1241153688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:38:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1241153688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
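These audit entries are the monitor-side view of a pool-stats poll from 192.168.122.10, most likely the Cinder RBD driver given the 'volumes' get-quota query (an inference, not stated in the log). A sketch of issuing the same two mon commands directly, assuming the python3-rados bindings and the same client.openstack identity and /etc/ceph/ceph.conf used elsewhere in this log:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        # mon_command takes a JSON-encoded command string plus an input buffer;
        # each call produces one 'dispatch' line in the mon audit channel.
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
        print(cmd["prefix"], ret, json.loads(outbuf or b'{}'))
    cluster.shutdown()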
Nov 25 20:38:18 compute-0 ceph-mon[75144]: pgmap v1000: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1001: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:19 compute-0 ceph-mon[75144]: pgmap v1001: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:21 compute-0 podman[263798]: 2025-11-25 20:38:21.012163364 +0000 UTC m=+0.105746432 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:38:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1002: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:22 compute-0 ceph-mon[75144]: pgmap v1002: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1003: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:24 compute-0 podman[263817]: 2025-11-25 20:38:24.032529055 +0000 UTC m=+0.121493457 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:38:24 compute-0 ceph-mon[75144]: pgmap v1003: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1004: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:26 compute-0 ceph-mon[75144]: pgmap v1004: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:38:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:38:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:38:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:38:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:38:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:38:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1005: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:27 compute-0 ceph-mon[75144]: pgmap v1005: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1006: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:30 compute-0 podman[263837]: 2025-11-25 20:38:30.038792634 +0000 UTC m=+0.130483912 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 20:38:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:30 compute-0 ceph-mon[75144]: pgmap v1006: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1007: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:32 compute-0 ceph-mon[75144]: pgmap v1007: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1008: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:33 compute-0 ceph-mon[75144]: pgmap v1008: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1009: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:36 compute-0 ceph-mon[75144]: pgmap v1009: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1010: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:38 compute-0 ceph-mon[75144]: pgmap v1010: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:39 compute-0 nova_compute[248866]: 2025-11-25 20:38:39.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:38:39 compute-0 nova_compute[248866]: 2025-11-25 20:38:39.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:38:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1011: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:40 compute-0 sudo[263865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:38:40 compute-0 sudo[263865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:40 compute-0 sudo[263865]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:40 compute-0 sudo[263890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:38:40 compute-0 sudo[263890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:40 compute-0 sudo[263890]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:40 compute-0 ceph-mon[75144]: pgmap v1011: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:40 compute-0 sudo[263915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:38:40 compute-0 sudo[263915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:40 compute-0 sudo[263915]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:40 compute-0 sudo[263940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:38:40 compute-0 sudo[263940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:41 compute-0 nova_compute[248866]: 2025-11-25 20:38:41.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:38:41 compute-0 nova_compute[248866]: 2025-11-25 20:38:41.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:38:41 compute-0 sudo[263940]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 25 20:38:41 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 25 20:38:41 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 25 20:38:41 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mgr[75443]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 25 20:38:41 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:38:41 compute-0 ceph-mgr[75443]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:38:41 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
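The autotune failure above is the host simply being too small: cephadm removed the per-OSD osd_memory_target overrides and then tried to set ~43691k, which falls below Ceph's 896 MiB floor for that option. The arithmetic, with both byte values taken straight from the warning:

    attempted = 44_740_198      # bytes; the autotuned per-OSD target (~43691k)
    minimum   = 939_524_096     # bytes; osd_memory_target minimum (896 MiB)
    print(attempted / 2**20)    # ~42.7 MiB per OSD
    print(minimum / 2**20)      # 896.0 MiB
    assert attempted < minimum  # so the 'config set' is rejected, as logged

Presumably the tuner split what little memory remains on this 7680MB converged node, which also hosts the nova/ovn containers, across the three collocated OSDs, hence the tiny target.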
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:38:41 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:38:41 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:38:41 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:38:41 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 497fd789-8b54-45ea-801f-3989bad0da76 does not exist
Nov 25 20:38:41 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 807f9c1f-90f7-4551-b6b4-53e4f7d03444 does not exist
Nov 25 20:38:41 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d419ca47-9f13-4df9-83e1-29855b07bc4c does not exist
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:38:41 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:38:41 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:38:41 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1012: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:41 compute-0 sudo[263996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:38:41 compute-0 sudo[263996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:41 compute-0 sudo[263996]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:41 compute-0 sudo[264021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:38:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:38:41 compute-0 ceph-mon[75144]: Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:38:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:38:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:38:41 compute-0 ceph-mon[75144]: pgmap v1012: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:41 compute-0 sudo[264021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:41 compute-0 sudo[264021]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:41 compute-0 sudo[264046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:38:41 compute-0 sudo[264046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:41 compute-0 sudo[264046]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:41 compute-0 sudo[264071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:38:41 compute-0 sudo[264071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:42 compute-0 podman[264137]: 2025-11-25 20:38:42.265055325 +0000 UTC m=+0.061937887 container create b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kowalevski, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:38:42 compute-0 systemd[1]: Started libpod-conmon-b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85.scope.
Nov 25 20:38:42 compute-0 podman[264137]: 2025-11-25 20:38:42.239699899 +0000 UTC m=+0.036582461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:38:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:38:42 compute-0 podman[264137]: 2025-11-25 20:38:42.372240885 +0000 UTC m=+0.169123457 container init b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:38:42 compute-0 podman[264137]: 2025-11-25 20:38:42.38128533 +0000 UTC m=+0.178167902 container start b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:38:42 compute-0 podman[264137]: 2025-11-25 20:38:42.386648615 +0000 UTC m=+0.183531167 container attach b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kowalevski, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:38:42 compute-0 exciting_kowalevski[264153]: 167 167
Nov 25 20:38:42 compute-0 systemd[1]: libpod-b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85.scope: Deactivated successfully.
Nov 25 20:38:42 compute-0 conmon[264153]: conmon b9c153b11c9631df8065 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85.scope/container/memory.events
Nov 25 20:38:42 compute-0 podman[264137]: 2025-11-25 20:38:42.390783206 +0000 UTC m=+0.187665758 container died b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a00659369afe1683fe3beebbdbdd6130af508798646c82f3dafd924463165ae-merged.mount: Deactivated successfully.
Nov 25 20:38:42 compute-0 podman[264137]: 2025-11-25 20:38:42.441928099 +0000 UTC m=+0.238810631 container remove b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:38:42 compute-0 systemd[1]: libpod-conmon-b9c153b11c9631df806511ad77ec6e2eef7ce031dfdcdf583ec3ae11c33d4a85.scope: Deactivated successfully.
Nov 25 20:38:42 compute-0 podman[264178]: 2025-11-25 20:38:42.658721495 +0000 UTC m=+0.062023189 container create 4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:38:42 compute-0 systemd[1]: Started libpod-conmon-4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c.scope.
Nov 25 20:38:42 compute-0 podman[264178]: 2025-11-25 20:38:42.628263291 +0000 UTC m=+0.031565035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:38:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ed3cf807ebbe71997277ca5565ae5eaaf89c100d56859af490bab4b6e11ba2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ed3cf807ebbe71997277ca5565ae5eaaf89c100d56859af490bab4b6e11ba2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ed3cf807ebbe71997277ca5565ae5eaaf89c100d56859af490bab4b6e11ba2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ed3cf807ebbe71997277ca5565ae5eaaf89c100d56859af490bab4b6e11ba2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ed3cf807ebbe71997277ca5565ae5eaaf89c100d56859af490bab4b6e11ba2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:42 compute-0 podman[264178]: 2025-11-25 20:38:42.777375655 +0000 UTC m=+0.180677419 container init 4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:38:42 compute-0 podman[264178]: 2025-11-25 20:38:42.786730498 +0000 UTC m=+0.190032182 container start 4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 20:38:42 compute-0 podman[264178]: 2025-11-25 20:38:42.792189716 +0000 UTC m=+0.195491360 container attach 4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:38:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1013: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:43 compute-0 flamboyant_vaughan[264194]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:38:43 compute-0 flamboyant_vaughan[264194]: --> relative data size: 1.0
Nov 25 20:38:43 compute-0 flamboyant_vaughan[264194]: --> All data devices are unavailable
Nov 25 20:38:43 compute-0 systemd[1]: libpod-4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c.scope: Deactivated successfully.
Nov 25 20:38:43 compute-0 podman[264178]: 2025-11-25 20:38:43.845575454 +0000 UTC m=+1.248877118 container died 4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:38:43 compute-0 systemd[1]: libpod-4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c.scope: Consumed 1.001s CPU time.
Nov 25 20:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5ed3cf807ebbe71997277ca5565ae5eaaf89c100d56859af490bab4b6e11ba2-merged.mount: Deactivated successfully.
Nov 25 20:38:43 compute-0 podman[264178]: 2025-11-25 20:38:43.903688245 +0000 UTC m=+1.306989939 container remove 4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:38:43 compute-0 systemd[1]: libpod-conmon-4d4600283a5b5b763deccc11a9437ea5427c4ff9467987fdd7196fd77e260b3c.scope: Deactivated successfully.
Nov 25 20:38:43 compute-0 sudo[264071]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:44 compute-0 sudo[264237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:38:44 compute-0 sudo[264237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:44 compute-0 sudo[264237]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:44 compute-0 nova_compute[248866]: 2025-11-25 20:38:44.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:38:44 compute-0 nova_compute[248866]: 2025-11-25 20:38:44.045 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:38:44 compute-0 sudo[264262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:38:44 compute-0 sudo[264262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:44 compute-0 sudo[264262]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:44 compute-0 sudo[264287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:38:44 compute-0 sudo[264287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:44 compute-0 sudo[264287]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:44 compute-0 sudo[264312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:38:44 compute-0 sudo[264312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:44 compute-0 ceph-mon[75144]: pgmap v1013: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:44 compute-0 podman[264377]: 2025-11-25 20:38:44.72631182 +0000 UTC m=+0.058779981 container create 13296535f04a323685d1085701550bb264f3f4e4a8b5be50052017fca33d8d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 20:38:44 compute-0 systemd[1]: Started libpod-conmon-13296535f04a323685d1085701550bb264f3f4e4a8b5be50052017fca33d8d13.scope.
Nov 25 20:38:44 compute-0 podman[264377]: 2025-11-25 20:38:44.702121885 +0000 UTC m=+0.034590076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:38:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:38:44 compute-0 podman[264377]: 2025-11-25 20:38:44.827128938 +0000 UTC m=+0.159597119 container init 13296535f04a323685d1085701550bb264f3f4e4a8b5be50052017fca33d8d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 20:38:44 compute-0 podman[264377]: 2025-11-25 20:38:44.839832191 +0000 UTC m=+0.172300352 container start 13296535f04a323685d1085701550bb264f3f4e4a8b5be50052017fca33d8d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 25 20:38:44 compute-0 podman[264377]: 2025-11-25 20:38:44.843711816 +0000 UTC m=+0.176179997 container attach 13296535f04a323685d1085701550bb264f3f4e4a8b5be50052017fca33d8d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:38:44 compute-0 objective_nobel[264393]: 167 167
Nov 25 20:38:44 compute-0 systemd[1]: libpod-13296535f04a323685d1085701550bb264f3f4e4a8b5be50052017fca33d8d13.scope: Deactivated successfully.
Nov 25 20:38:44 compute-0 podman[264377]: 2025-11-25 20:38:44.85014183 +0000 UTC m=+0.182609981 container died 13296535f04a323685d1085701550bb264f3f4e4a8b5be50052017fca33d8d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc7cf4798259705dfdd5de1357db02ca16f217b0c98def07f5a71da93cb51703-merged.mount: Deactivated successfully.
Nov 25 20:38:44 compute-0 podman[264377]: 2025-11-25 20:38:44.89451176 +0000 UTC m=+0.226979911 container remove 13296535f04a323685d1085701550bb264f3f4e4a8b5be50052017fca33d8d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 20:38:44 compute-0 systemd[1]: libpod-conmon-13296535f04a323685d1085701550bb264f3f4e4a8b5be50052017fca33d8d13.scope: Deactivated successfully.
Nov 25 20:38:45 compute-0 podman[264417]: 2025-11-25 20:38:45.153374673 +0000 UTC m=+0.072402019 container create a4ba5228a236081b960de1ae9f5769ac1cf680dcff2a43eae272b9cbf9fb7c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:38:45 compute-0 systemd[1]: Started libpod-conmon-a4ba5228a236081b960de1ae9f5769ac1cf680dcff2a43eae272b9cbf9fb7c59.scope.
Nov 25 20:38:45 compute-0 podman[264417]: 2025-11-25 20:38:45.124983235 +0000 UTC m=+0.044010631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:38:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec9813e805ad7de00570ff19b081c17f4896dc4bf8ab4e8fc234ac2d5d854333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec9813e805ad7de00570ff19b081c17f4896dc4bf8ab4e8fc234ac2d5d854333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec9813e805ad7de00570ff19b081c17f4896dc4bf8ab4e8fc234ac2d5d854333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec9813e805ad7de00570ff19b081c17f4896dc4bf8ab4e8fc234ac2d5d854333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:45 compute-0 podman[264417]: 2025-11-25 20:38:45.261325784 +0000 UTC m=+0.180353190 container init a4ba5228a236081b960de1ae9f5769ac1cf680dcff2a43eae272b9cbf9fb7c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:38:45 compute-0 podman[264417]: 2025-11-25 20:38:45.269364472 +0000 UTC m=+0.188391828 container start a4ba5228a236081b960de1ae9f5769ac1cf680dcff2a43eae272b9cbf9fb7c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 20:38:45 compute-0 podman[264417]: 2025-11-25 20:38:45.273428301 +0000 UTC m=+0.192455647 container attach a4ba5228a236081b960de1ae9f5769ac1cf680dcff2a43eae272b9cbf9fb7c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:38:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1014: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:46 compute-0 distracted_cerf[264433]: {
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:     "0": [
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:         {
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "devices": [
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "/dev/loop3"
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             ],
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_name": "ceph_lv0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_size": "21470642176",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "name": "ceph_lv0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "tags": {
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.cluster_name": "ceph",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.crush_device_class": "",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.encrypted": "0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.osd_id": "0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.type": "block",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.vdo": "0"
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             },
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "type": "block",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "vg_name": "ceph_vg0"
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:         }
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:     ],
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:     "1": [
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:         {
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "devices": [
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "/dev/loop4"
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             ],
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_name": "ceph_lv1",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_size": "21470642176",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "name": "ceph_lv1",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "tags": {
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.cluster_name": "ceph",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.crush_device_class": "",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.encrypted": "0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.osd_id": "1",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.type": "block",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.vdo": "0"
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             },
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "type": "block",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "vg_name": "ceph_vg1"
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:         }
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:     ],
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:     "2": [
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:         {
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "devices": [
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "/dev/loop5"
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             ],
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_name": "ceph_lv2",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_size": "21470642176",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "name": "ceph_lv2",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "tags": {
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.cluster_name": "ceph",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.crush_device_class": "",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.encrypted": "0",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.osd_id": "2",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.type": "block",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:                 "ceph.vdo": "0"
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             },
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "type": "block",
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:             "vg_name": "ceph_vg2"
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:         }
Nov 25 20:38:46 compute-0 distracted_cerf[264433]:     ]
Nov 25 20:38:46 compute-0 distracted_cerf[264433]: }
Nov 25 20:38:46 compute-0 nova_compute[248866]: 2025-11-25 20:38:46.044 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:38:46 compute-0 systemd[1]: libpod-a4ba5228a236081b960de1ae9f5769ac1cf680dcff2a43eae272b9cbf9fb7c59.scope: Deactivated successfully.
Nov 25 20:38:46 compute-0 podman[264442]: 2025-11-25 20:38:46.110925518 +0000 UTC m=+0.033820716 container died a4ba5228a236081b960de1ae9f5769ac1cf680dcff2a43eae272b9cbf9fb7c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec9813e805ad7de00570ff19b081c17f4896dc4bf8ab4e8fc234ac2d5d854333-merged.mount: Deactivated successfully.
Nov 25 20:38:46 compute-0 podman[264442]: 2025-11-25 20:38:46.172116144 +0000 UTC m=+0.095011262 container remove a4ba5228a236081b960de1ae9f5769ac1cf680dcff2a43eae272b9cbf9fb7c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:38:46 compute-0 systemd[1]: libpod-conmon-a4ba5228a236081b960de1ae9f5769ac1cf680dcff2a43eae272b9cbf9fb7c59.scope: Deactivated successfully.
Nov 25 20:38:46 compute-0 sudo[264312]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:46 compute-0 sudo[264457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:38:46 compute-0 sudo[264457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:46 compute-0 sudo[264457]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:46 compute-0 sudo[264482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:38:46 compute-0 sudo[264482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:46 compute-0 sudo[264482]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:46 compute-0 sudo[264507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:38:46 compute-0 sudo[264507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:46 compute-0 sudo[264507]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:46 compute-0 sudo[264532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:38:46 compute-0 sudo[264532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:46 compute-0 ceph-mon[75144]: pgmap v1014: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:46 compute-0 podman[264597]: 2025-11-25 20:38:46.976857745 +0000 UTC m=+0.053919190 container create bd9977092fe652063f8f95b3f131e3ce715d97532e3d766480faa6edd88a14a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:38:47 compute-0 systemd[1]: Started libpod-conmon-bd9977092fe652063f8f95b3f131e3ce715d97532e3d766480faa6edd88a14a8.scope.
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:38:47 compute-0 podman[264597]: 2025-11-25 20:38:46.95856099 +0000 UTC m=+0.035622425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:38:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:38:47 compute-0 podman[264597]: 2025-11-25 20:38:47.078142065 +0000 UTC m=+0.155203520 container init bd9977092fe652063f8f95b3f131e3ce715d97532e3d766480faa6edd88a14a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.078 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.078 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.079 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.079 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.079 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:38:47 compute-0 podman[264597]: 2025-11-25 20:38:47.0901439 +0000 UTC m=+0.167205345 container start bd9977092fe652063f8f95b3f131e3ce715d97532e3d766480faa6edd88a14a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:38:47 compute-0 interesting_clarke[264613]: 167 167
Nov 25 20:38:47 compute-0 systemd[1]: libpod-bd9977092fe652063f8f95b3f131e3ce715d97532e3d766480faa6edd88a14a8.scope: Deactivated successfully.
Nov 25 20:38:47 compute-0 podman[264597]: 2025-11-25 20:38:47.098932028 +0000 UTC m=+0.175993493 container attach bd9977092fe652063f8f95b3f131e3ce715d97532e3d766480faa6edd88a14a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:38:47 compute-0 podman[264597]: 2025-11-25 20:38:47.099478162 +0000 UTC m=+0.176539617 container died bd9977092fe652063f8f95b3f131e3ce715d97532e3d766480faa6edd88a14a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1b66314690b96847033643856a2a290a759ee5010c342f8c5cfc8d6bba6563b-merged.mount: Deactivated successfully.
Nov 25 20:38:47 compute-0 podman[264597]: 2025-11-25 20:38:47.162064485 +0000 UTC m=+0.239125940 container remove bd9977092fe652063f8f95b3f131e3ce715d97532e3d766480faa6edd88a14a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:38:47 compute-0 systemd[1]: libpod-conmon-bd9977092fe652063f8f95b3f131e3ce715d97532e3d766480faa6edd88a14a8.scope: Deactivated successfully.
Nov 25 20:38:47 compute-0 podman[264657]: 2025-11-25 20:38:47.434694431 +0000 UTC m=+0.067192119 container create 37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:38:47 compute-0 systemd[1]: Started libpod-conmon-37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465.scope.
Nov 25 20:38:47 compute-0 podman[264657]: 2025-11-25 20:38:47.413885678 +0000 UTC m=+0.046383346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:38:47 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:38:47 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729524569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:38:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.535 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f348163b1812e390c29ce0254dafde932c027a4091a3a86d59ec08d56744d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f348163b1812e390c29ce0254dafde932c027a4091a3a86d59ec08d56744d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f348163b1812e390c29ce0254dafde932c027a4091a3a86d59ec08d56744d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f348163b1812e390c29ce0254dafde932c027a4091a3a86d59ec08d56744d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:38:47 compute-0 podman[264657]: 2025-11-25 20:38:47.555197021 +0000 UTC m=+0.187694709 container init 37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:38:47 compute-0 podman[264657]: 2025-11-25 20:38:47.56993263 +0000 UTC m=+0.202430318 container start 37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:38:47 compute-0 podman[264657]: 2025-11-25 20:38:47.574855863 +0000 UTC m=+0.207353651 container attach 37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:38:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1015: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:47 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1729524569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:38:47 compute-0 ceph-mon[75144]: pgmap v1015: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.760 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.762 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5243MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.763 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:38:47 compute-0 nova_compute[248866]: 2025-11-25 20:38:47.764 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:38:48 compute-0 nova_compute[248866]: 2025-11-25 20:38:48.060 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:38:48 compute-0 nova_compute[248866]: 2025-11-25 20:38:48.061 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:38:48 compute-0 nova_compute[248866]: 2025-11-25 20:38:48.105 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
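The "ceph df --format=json" subprocess above is how nova's libvirt driver sizes an RBD-backed pool: it shells out with the client id and conf path and reads the cluster totals back. A minimal Python sketch of the same probe, assuming the top-level "stats" block (total_bytes, total_avail_bytes) that ceph df emits in JSON mode:

    # Sketch of the capacity probe logged above. Field names follow the
    # JSON that `ceph df --format=json` emits; client id and conf path are
    # taken verbatim from the log line.
    import json
    import subprocess

    def ceph_pool_capacity(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf]
        )
        stats = json.loads(out)["stats"]
        total_gib = stats["total_bytes"] / 1024 ** 3
        avail_gib = stats["total_avail_bytes"] / 1024 ** 3
        return total_gib, avail_gib  # roughly (60.0, 60.0) for this cluster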
Nov 25 20:38:48 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:38:48 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641291319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:38:48 compute-0 suspicious_carson[264674]: {
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "osd_id": 2,
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "type": "bluestore"
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:     },
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "osd_id": 1,
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "type": "bluestore"
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:     },
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "osd_id": 0,
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:         "type": "bluestore"
Nov 25 20:38:48 compute-0 suspicious_carson[264674]:     }
Nov 25 20:38:48 compute-0 suspicious_carson[264674]: }
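The JSON block above is the stdout of a short-lived ceph container (podman's generated name suspicious_carson) that cephadm launches to inventory the local OSDs; the shape matches a per-OSD listing keyed by osd_uuid. A small sketch that folds it into an osd_id-to-device map, with the literals copied from the log:

    # Sketch: reducing the OSD inventory JSON above to osd_id -> device.
    inventory = {
        "21cf5470-2713-4831-8402-4fccd506c64e":
            {"device": "/dev/mapper/ceph_vg2-ceph_lv2", "osd_id": 2, "type": "bluestore"},
        "7e844079-8f15-40a1-8d48-4a531b96b291":
            {"device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1, "type": "bluestore"},
        "f0a2211a-2b5d-4914-9a66-9743102e8fa4":
            {"device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0, "type": "bluestore"},
    }
    by_id = {v["osd_id"]: v["device"] for v in inventory.values()}
    # {2: '/dev/mapper/ceph_vg2-ceph_lv2', 1: '...vg1-ceph_lv1', 0: '...vg0-ceph_lv0'}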
Nov 25 20:38:48 compute-0 nova_compute[248866]: 2025-11-25 20:38:48.590 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:38:48 compute-0 nova_compute[248866]: 2025-11-25 20:38:48.598 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:38:48 compute-0 systemd[1]: libpod-37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465.scope: Deactivated successfully.
Nov 25 20:38:48 compute-0 podman[264657]: 2025-11-25 20:38:48.603303106 +0000 UTC m=+1.235800794 container died 37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:38:48 compute-0 systemd[1]: libpod-37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465.scope: Consumed 1.018s CPU time.
Nov 25 20:38:48 compute-0 nova_compute[248866]: 2025-11-25 20:38:48.618 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
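The inventory dict above is what the resource tracker reports to placement. Usable capacity per resource class works out as (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7168 MB of RAM and about 53 GB of disk. A worked sketch with the same numbers:

    # Sketch: the effective capacity placement derives from the inventory
    # in the log line above, using capacity = (total - reserved) * ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1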
Nov 25 20:38:48 compute-0 nova_compute[248866]: 2025-11-25 20:38:48.622 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:38:48 compute-0 nova_compute[248866]: 2025-11-25 20:38:48.622 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-21f348163b1812e390c29ce0254dafde932c027a4091a3a86d59ec08d56744d3-merged.mount: Deactivated successfully.
Nov 25 20:38:48 compute-0 podman[264657]: 2025-11-25 20:38:48.667717188 +0000 UTC m=+1.300214886 container remove 37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:38:48 compute-0 systemd[1]: libpod-conmon-37ef38fe15de2c1539221cb0d8f0d356fc9a971944821c6db3d37f42a759a465.scope: Deactivated successfully.
Nov 25 20:38:48 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3641291319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:38:48 compute-0 sudo[264532]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:48 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:38:48 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:38:48 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:38:48 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:38:48 compute-0 sudo[264744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:38:48 compute-0 sudo[264744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:48 compute-0 sudo[264744]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:48 compute-0 sudo[264769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:38:48 compute-0 sudo[264769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:38:48 compute-0 sudo[264769]: pam_unix(sudo:session): session closed for user root
Nov 25 20:38:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:38:48.954 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:38:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:38:48.955 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:38:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:38:48.955 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:38:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1016: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:49 compute-0 nova_compute[248866]: 2025-11-25 20:38:49.623 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:38:49 compute-0 nova_compute[248866]: 2025-11-25 20:38:49.623 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:38:49 compute-0 nova_compute[248866]: 2025-11-25 20:38:49.624 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:38:49 compute-0 nova_compute[248866]: 2025-11-25 20:38:49.647 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:38:49 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:38:49 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:38:49 compute-0 ceph-mon[75144]: pgmap v1016: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
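The pgmap digests repeated throughout this window all carry the same five figures: pg state counts, logical data, raw used, and available/total capacity. A throwaway parser fitted to this log's line format (not a stable interface):

    # Sketch: pulling the figures out of a pgmap digest line like the
    # ones above; the regex is fitted to this log, not a documented API.
    import re

    line = ("pgmap v1016: 193 pgs: 193 active+clean; "
            "449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail")
    m = re.search(
        r"pgmap v(\d+): (\d+) pgs: .*; "
        r"([\d.]+ \w+) data, ([\d.]+ \w+) used, "
        r"([\d.]+ \w+) / ([\d.]+ \w+) avail", line)
    version, pgs, data, used, avail, total = m.groups()
    # ('1016', '193', '449 KiB', '80 MiB', '60 GiB', '60 GiB')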
Nov 25 20:38:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1017: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:51 compute-0 podman[264794]: 2025-11-25 20:38:51.994334784 +0000 UTC m=+0.085945525 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:38:52 compute-0 ceph-mon[75144]: pgmap v1017: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1018: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:54 compute-0 ceph-mon[75144]: pgmap v1018: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:54 compute-0 podman[264814]: 2025-11-25 20:38:54.973843979 +0000 UTC m=+0.074375594 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
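The health_status=healthy events above are podman's timer firing each container's configured healthcheck (test: /openstack/healthcheck, bind-mounted at /openstack). The same probe can be run on demand; a sketch, assuming podman's "healthcheck run" subcommand and the container name from the log:

    # Sketch: triggering the same probe podman's timer runs; the command
    # exits 0 when the container's configured healthcheck test passes.
    import subprocess

    def container_healthy(name="multipathd"):
        return subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True,
        ).returncode == 0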
Nov 25 20:38:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:38:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1019: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:55 compute-0 ceph-mon[75144]: pgmap v1019: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:38:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:38:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:38:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:38:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:38:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:38:57
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr', 'backups', 'images']
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:38:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1020: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:58 compute-0 ceph-mon[75144]: pgmap v1020: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:38:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1021: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:00 compute-0 ceph-mon[75144]: pgmap v1021: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:01 compute-0 podman[264834]: 2025-11-25 20:39:01.025187079 +0000 UTC m=+0.125412135 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 20:39:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1022: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:01 compute-0 ceph-mon[75144]: pgmap v1022: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:39:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
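The pg_autoscaler figures above are reproducible from the log itself: a pool's raw PG target is its capacity share times the cluster PG budget (mon_target_pg_per_osd, default 100, times the 3 OSDs inventoried earlier). For '.mgr', 1.4371e-05 * 300 gives the logged 0.00431, which lands on 1 PG. A sketch, with power-of-two quantization standing in for the autoscaler's fuller rounding and no-shrink rules:

    # Sketch of the autoscaler arithmetic behind the '.mgr' line above,
    # assuming the default mon_target_pg_per_osd=100 and the 3 OSDs seen
    # earlier; quantizing up to a power of two is a simplification.
    def pg_target(usage_ratio, osds=3, target_pg_per_osd=100):
        raw = usage_ratio * osds * target_pg_per_osd
        pg = 1
        while pg < raw:
            pg *= 2
        return raw, pg

    print(pg_target(1.4371499967441557e-05))
    # (0.004311449990232467, 1) -- matches "pg target 0.00431... quantized to 1"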
Nov 25 20:39:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1023: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:04 compute-0 ceph-mon[75144]: pgmap v1023: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1024: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:06 compute-0 ceph-mon[75144]: pgmap v1024: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1025: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:07 compute-0 ceph-mon[75144]: pgmap v1025: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1026: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:10 compute-0 ceph-mon[75144]: pgmap v1026: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1027: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:12 compute-0 ceph-mon[75144]: pgmap v1027: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1028: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:13 compute-0 ceph-mon[75144]: pgmap v1028: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1029: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:16 compute-0 ceph-mon[75144]: pgmap v1029: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:39:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295024092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:39:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:39:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295024092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:39:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1030: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/295024092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:39:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/295024092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:39:18 compute-0 ceph-mon[75144]: pgmap v1030: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1031: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:19 compute-0 ceph-mon[75144]: pgmap v1031: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1032: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:22 compute-0 ceph-mon[75144]: pgmap v1032: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:22 compute-0 podman[264861]: 2025-11-25 20:39:22.983465143 +0000 UTC m=+0.078148945 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:39:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1033: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:24 compute-0 ceph-mon[75144]: pgmap v1033: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1034: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:25 compute-0 ceph-mon[75144]: pgmap v1034: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:25 compute-0 podman[264881]: 2025-11-25 20:39:25.995224872 +0000 UTC m=+0.096779540 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:39:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:39:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:39:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:39:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:39:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:39:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:39:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1035: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:28 compute-0 ceph-mon[75144]: pgmap v1035: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1036: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:29 compute-0 ceph-mon[75144]: pgmap v1036: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1037: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:32 compute-0 podman[264901]: 2025-11-25 20:39:32.441499476 +0000 UTC m=+0.117785987 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 20:39:32 compute-0 ceph-mon[75144]: pgmap v1037: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1038: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:34 compute-0 ceph-mon[75144]: pgmap v1038: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1039: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:35 compute-0 ceph-mon[75144]: pgmap v1039: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1040: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:37 compute-0 ceph-mon[75144]: pgmap v1040: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:39 compute-0 nova_compute[248866]: 2025-11-25 20:39:39.061 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1041: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:40 compute-0 ceph-mon[75144]: pgmap v1041: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:41 compute-0 nova_compute[248866]: 2025-11-25 20:39:41.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1042: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:41 compute-0 ceph-mon[75144]: pgmap v1042: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:43 compute-0 nova_compute[248866]: 2025-11-25 20:39:43.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:43 compute-0 nova_compute[248866]: 2025-11-25 20:39:43.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:39:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1043: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:44 compute-0 nova_compute[248866]: 2025-11-25 20:39:44.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:44 compute-0 nova_compute[248866]: 2025-11-25 20:39:44.044 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:44 compute-0 ceph-mon[75144]: pgmap v1043: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1044: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:45 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 20:39:46 compute-0 nova_compute[248866]: 2025-11-25 20:39:46.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:46 compute-0 ceph-mon[75144]: pgmap v1044: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.095 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.096 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.096 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.096 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.097 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:39:47 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:39:47 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530123254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.617 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:39:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1045: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:47 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2530123254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:39:47 compute-0 ceph-mon[75144]: pgmap v1045: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.854 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.856 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5321MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.857 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.857 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.943 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.944 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:39:47 compute-0 nova_compute[248866]: 2025-11-25 20:39:47.968 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:39:48 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:39:48 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306838913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:39:48 compute-0 nova_compute[248866]: 2025-11-25 20:39:48.440 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:39:48 compute-0 nova_compute[248866]: 2025-11-25 20:39:48.448 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:39:48 compute-0 nova_compute[248866]: 2025-11-25 20:39:48.533 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:39:48 compute-0 nova_compute[248866]: 2025-11-25 20:39:48.536 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:39:48 compute-0 nova_compute[248866]: 2025-11-25 20:39:48.536 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:39:48 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3306838913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:39:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:39:48.956 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:39:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:39:48.956 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:39:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:39:48.957 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:39:48 compute-0 sudo[264973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:39:48 compute-0 sudo[264973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:48 compute-0 sudo[264973]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:49 compute-0 sudo[264998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:39:49 compute-0 sudo[264998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:49 compute-0 sudo[264998]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:49 compute-0 sudo[265023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:39:49 compute-0 sudo[265023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:49 compute-0 sudo[265023]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:49 compute-0 sudo[265048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:39:49 compute-0 sudo[265048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:49 compute-0 nova_compute[248866]: 2025-11-25 20:39:49.537 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:49 compute-0 nova_compute[248866]: 2025-11-25 20:39:49.538 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:39:49 compute-0 nova_compute[248866]: 2025-11-25 20:39:49.538 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:39:49 compute-0 nova_compute[248866]: 2025-11-25 20:39:49.564 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:39:49 compute-0 nova_compute[248866]: 2025-11-25 20:39:49.565 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:39:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1046: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:49 compute-0 ceph-mon[75144]: pgmap v1046: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:49 compute-0 sudo[265048]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:39:49 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:39:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:39:49 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:39:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:39:49 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:39:49 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 9d914dd2-56e2-486a-87e2-7e98dee7bbcc does not exist
Nov 25 20:39:49 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev a4ed091c-c77a-4664-9906-48128e81ee03 does not exist
Nov 25 20:39:49 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 54f0a324-76a2-4b77-b5c9-bf78607cf4da does not exist
Nov 25 20:39:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:39:49 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:39:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:39:49 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:39:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:39:49 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
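The burst of mon_command dispatches above is the mgr's cephadm module gathering what it needs before provisioning OSDs: a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, and the set of destroyed OSD ids it could reuse. A minimal sketch of issuing the same two read-only queries from the ceph CLI (an illustration only, assuming a host with ceph on PATH and a readable admin keyring; the mgr itself sends these over its own mon connection, not via the CLI):

import json
import subprocess

# {"prefix": "config generate-minimal-conf"} returns INI text, not JSON
minimal_conf = subprocess.check_output(
    ["ceph", "config", "generate-minimal-conf"]).decode()

# {"prefix": "osd tree", "states": ["destroyed"], "format": "json"};
# because the state filter is "destroyed", every osd-type node returned
# is a destroyed OSD whose id could be reused
tree = json.loads(subprocess.check_output(
    ["ceph", "osd", "tree", "destroyed", "--format", "json"]))
destroyed_ids = [n["id"] for n in tree.get("nodes", [])
                 if n.get("type") == "osd"]

print(minimal_conf)
print("reusable destroyed osd ids:", destroyed_ids)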
Nov 25 20:39:49 compute-0 sudo[265103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:39:49 compute-0 sudo[265103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:49 compute-0 sudo[265103]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:50 compute-0 sudo[265128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:39:50 compute-0 sudo[265128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:50 compute-0 sudo[265128]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:50 compute-0 sudo[265153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:39:50 compute-0 sudo[265153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:50 compute-0 sudo[265153]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:50 compute-0 sudo[265178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:39:50 compute-0 sudo[265178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:50 compute-0 podman[265243]: 2025-11-25 20:39:50.624080254 +0000 UTC m=+0.057135426 container create 0b1882b38cf2cd86baba9dcb16bf9f48797d3e007a61242f35e07741f05fe875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:39:50 compute-0 systemd[1]: Started libpod-conmon-0b1882b38cf2cd86baba9dcb16bf9f48797d3e007a61242f35e07741f05fe875.scope.
Nov 25 20:39:50 compute-0 podman[265243]: 2025-11-25 20:39:50.595318797 +0000 UTC m=+0.028374019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:39:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:39:50 compute-0 podman[265243]: 2025-11-25 20:39:50.729757404 +0000 UTC m=+0.162812626 container init 0b1882b38cf2cd86baba9dcb16bf9f48797d3e007a61242f35e07741f05fe875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhabha, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:39:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:39:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:39:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:39:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:39:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:39:50 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:39:50 compute-0 podman[265243]: 2025-11-25 20:39:50.742305153 +0000 UTC m=+0.175360325 container start 0b1882b38cf2cd86baba9dcb16bf9f48797d3e007a61242f35e07741f05fe875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:39:50 compute-0 podman[265243]: 2025-11-25 20:39:50.746430815 +0000 UTC m=+0.179485987 container attach 0b1882b38cf2cd86baba9dcb16bf9f48797d3e007a61242f35e07741f05fe875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:39:50 compute-0 beautiful_bhabha[265259]: 167 167
Nov 25 20:39:50 compute-0 systemd[1]: libpod-0b1882b38cf2cd86baba9dcb16bf9f48797d3e007a61242f35e07741f05fe875.scope: Deactivated successfully.
Nov 25 20:39:50 compute-0 podman[265243]: 2025-11-25 20:39:50.750360691 +0000 UTC m=+0.183415833 container died 0b1882b38cf2cd86baba9dcb16bf9f48797d3e007a61242f35e07741f05fe875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhabha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 20:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6eba2d59936117d094985657108526323bd60be072c36309f9a998993a8c40c-merged.mount: Deactivated successfully.
Nov 25 20:39:50 compute-0 podman[265243]: 2025-11-25 20:39:50.793810637 +0000 UTC m=+0.226865789 container remove 0b1882b38cf2cd86baba9dcb16bf9f48797d3e007a61242f35e07741f05fe875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhabha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:39:50 compute-0 systemd[1]: libpod-conmon-0b1882b38cf2cd86baba9dcb16bf9f48797d3e007a61242f35e07741f05fe875.scope: Deactivated successfully.
Nov 25 20:39:51 compute-0 podman[265283]: 2025-11-25 20:39:51.021488966 +0000 UTC m=+0.066385648 container create 978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_merkle, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:39:51 compute-0 systemd[1]: Started libpod-conmon-978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450.scope.
Nov 25 20:39:51 compute-0 podman[265283]: 2025-11-25 20:39:50.991760561 +0000 UTC m=+0.036657323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:39:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2d404d674d03f20cfc52f5e74ce768343c632ed522a086f6f2e74b97734b4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2d404d674d03f20cfc52f5e74ce768343c632ed522a086f6f2e74b97734b4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2d404d674d03f20cfc52f5e74ce768343c632ed522a086f6f2e74b97734b4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2d404d674d03f20cfc52f5e74ce768343c632ed522a086f6f2e74b97734b4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2d404d674d03f20cfc52f5e74ce768343c632ed522a086f6f2e74b97734b4a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:51 compute-0 podman[265283]: 2025-11-25 20:39:51.132366296 +0000 UTC m=+0.177263048 container init 978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:39:51 compute-0 podman[265283]: 2025-11-25 20:39:51.143488587 +0000 UTC m=+0.188385299 container start 978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:39:51 compute-0 podman[265283]: 2025-11-25 20:39:51.148711818 +0000 UTC m=+0.193608580 container attach 978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_merkle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:39:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1047: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:51 compute-0 ceph-mon[75144]: pgmap v1047: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:52 compute-0 mystifying_merkle[265300]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:39:52 compute-0 mystifying_merkle[265300]: --> relative data size: 1.0
Nov 25 20:39:52 compute-0 mystifying_merkle[265300]: --> All data devices are unavailable
Nov 25 20:39:52 compute-0 systemd[1]: libpod-978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450.scope: Deactivated successfully.
Nov 25 20:39:52 compute-0 podman[265283]: 2025-11-25 20:39:52.386277378 +0000 UTC m=+1.431174090 container died 978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:39:52 compute-0 systemd[1]: libpod-978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450.scope: Consumed 1.201s CPU time.
Nov 25 20:39:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d2d404d674d03f20cfc52f5e74ce768343c632ed522a086f6f2e74b97734b4a-merged.mount: Deactivated successfully.
Nov 25 20:39:52 compute-0 podman[265283]: 2025-11-25 20:39:52.463540488 +0000 UTC m=+1.508437190 container remove 978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:39:52 compute-0 systemd[1]: libpod-conmon-978f052d726b4f2204549ae099e1c8fb56ad1c047fd5b22da62f794c3b7a8450.scope: Deactivated successfully.
Nov 25 20:39:52 compute-0 sudo[265178]: pam_unix(sudo:session): session closed for user root
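The batch run above was handed three LVM data devices ("0 physical, 3 LVM") and exited after declaring them all unavailable; the lvm list output further down is consistent with the reason being that each LV already carries ceph.* tags for OSDs 0-2, so ceph-volume will not reprovision them. A minimal sketch of spotting already-prepared LVs from their tags (assuming lvs runs as root; the ceph.osd_id= test is an inference from the tags shown below, not ceph-volume's exact internal check):

import json
import subprocess

# LVM2's JSON report: {"report": [{"lv": [{"lv_name": ..., "lv_tags": ...}]}]}
report = json.loads(subprocess.check_output(
    ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"]))

for lv in report["report"][0]["lv"]:
    # an LV tagged with ceph.osd_id= is already a prepared OSD
    prepared = "ceph.osd_id=" in lv["lv_tags"]
    print(f"{lv['vg_name']}/{lv['lv_name']}: "
          f"{'already an OSD' if prepared else 'free'}")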
Nov 25 20:39:52 compute-0 sudo[265341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:39:52 compute-0 sudo[265341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:52 compute-0 sudo[265341]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:52 compute-0 sudo[265366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:39:52 compute-0 sudo[265366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:52 compute-0 sudo[265366]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:52 compute-0 sudo[265391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:39:52 compute-0 sudo[265391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:52 compute-0 sudo[265391]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:52 compute-0 sudo[265416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:39:52 compute-0 sudo[265416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:53 compute-0 podman[265481]: 2025-11-25 20:39:53.340285357 +0000 UTC m=+0.068629978 container create f36d397d4945555efd2b73dec7c9f2023e7d69c84e41ab976c229adb9925b9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:39:53 compute-0 systemd[1]: Started libpod-conmon-f36d397d4945555efd2b73dec7c9f2023e7d69c84e41ab976c229adb9925b9ab.scope.
Nov 25 20:39:53 compute-0 podman[265481]: 2025-11-25 20:39:53.310167692 +0000 UTC m=+0.038512393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:39:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:39:53 compute-0 podman[265481]: 2025-11-25 20:39:53.424373572 +0000 UTC m=+0.152718263 container init f36d397d4945555efd2b73dec7c9f2023e7d69c84e41ab976c229adb9925b9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:39:53 compute-0 podman[265481]: 2025-11-25 20:39:53.436787657 +0000 UTC m=+0.165132278 container start f36d397d4945555efd2b73dec7c9f2023e7d69c84e41ab976c229adb9925b9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:39:53 compute-0 podman[265481]: 2025-11-25 20:39:53.44166351 +0000 UTC m=+0.170008161 container attach f36d397d4945555efd2b73dec7c9f2023e7d69c84e41ab976c229adb9925b9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ishizaka, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:39:53 compute-0 determined_ishizaka[265498]: 167 167
Nov 25 20:39:53 compute-0 systemd[1]: libpod-f36d397d4945555efd2b73dec7c9f2023e7d69c84e41ab976c229adb9925b9ab.scope: Deactivated successfully.
Nov 25 20:39:53 compute-0 podman[265481]: 2025-11-25 20:39:53.445651927 +0000 UTC m=+0.173996568 container died f36d397d4945555efd2b73dec7c9f2023e7d69c84e41ab976c229adb9925b9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ishizaka, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7c4846b76e542bf36ab5c70d09d8c9de85036c0b852c7909d6f2c33d88205dc-merged.mount: Deactivated successfully.
Nov 25 20:39:53 compute-0 podman[265481]: 2025-11-25 20:39:53.494929391 +0000 UTC m=+0.223274042 container remove f36d397d4945555efd2b73dec7c9f2023e7d69c84e41ab976c229adb9925b9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:39:53 compute-0 systemd[1]: libpod-conmon-f36d397d4945555efd2b73dec7c9f2023e7d69c84e41ab976c229adb9925b9ab.scope: Deactivated successfully.
Nov 25 20:39:53 compute-0 podman[265495]: 2025-11-25 20:39:53.519849125 +0000 UTC m=+0.127809889 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 20:39:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1048: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:53 compute-0 podman[265540]: 2025-11-25 20:39:53.768261506 +0000 UTC m=+0.074714363 container create 6653a2017a722b11f3c926cbc198bc67b542e11725f87e756aa5a475bcf429e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:39:53 compute-0 systemd[1]: Started libpod-conmon-6653a2017a722b11f3c926cbc198bc67b542e11725f87e756aa5a475bcf429e6.scope.
Nov 25 20:39:53 compute-0 podman[265540]: 2025-11-25 20:39:53.740238158 +0000 UTC m=+0.046691065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:39:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca5444d0b0deab93f52fd5120dd4409f0358522c0bf24642144612f68397010/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca5444d0b0deab93f52fd5120dd4409f0358522c0bf24642144612f68397010/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca5444d0b0deab93f52fd5120dd4409f0358522c0bf24642144612f68397010/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca5444d0b0deab93f52fd5120dd4409f0358522c0bf24642144612f68397010/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:53 compute-0 podman[265540]: 2025-11-25 20:39:53.881943011 +0000 UTC m=+0.188395868 container init 6653a2017a722b11f3c926cbc198bc67b542e11725f87e756aa5a475bcf429e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:39:53 compute-0 podman[265540]: 2025-11-25 20:39:53.897687967 +0000 UTC m=+0.204140824 container start 6653a2017a722b11f3c926cbc198bc67b542e11725f87e756aa5a475bcf429e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:39:53 compute-0 podman[265540]: 2025-11-25 20:39:53.901654704 +0000 UTC m=+0.208107551 container attach 6653a2017a722b11f3c926cbc198bc67b542e11725f87e756aa5a475bcf429e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]: {
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:     "0": [
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:         {
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "devices": [
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "/dev/loop3"
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             ],
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_name": "ceph_lv0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_size": "21470642176",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "name": "ceph_lv0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "tags": {
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.cluster_name": "ceph",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.crush_device_class": "",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.encrypted": "0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.osd_id": "0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.type": "block",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.vdo": "0"
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             },
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "type": "block",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "vg_name": "ceph_vg0"
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:         }
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:     ],
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:     "1": [
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:         {
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "devices": [
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "/dev/loop4"
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             ],
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_name": "ceph_lv1",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_size": "21470642176",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "name": "ceph_lv1",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "tags": {
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.cluster_name": "ceph",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.crush_device_class": "",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.encrypted": "0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.osd_id": "1",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.type": "block",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.vdo": "0"
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             },
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "type": "block",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "vg_name": "ceph_vg1"
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:         }
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:     ],
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:     "2": [
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:         {
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "devices": [
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "/dev/loop5"
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             ],
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_name": "ceph_lv2",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_size": "21470642176",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "name": "ceph_lv2",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "tags": {
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.cluster_name": "ceph",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.crush_device_class": "",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.encrypted": "0",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.osd_id": "2",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.type": "block",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:                 "ceph.vdo": "0"
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             },
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "type": "block",
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:             "vg_name": "ceph_vg2"
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:         }
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]:     ]
Nov 25 20:39:54 compute-0 pedantic_wilson[265557]: }
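A minimal sketch of reducing the "ceph-volume lvm list --format json" output printed above to the osd-to-device mapping cephadm reconciles against, assuming the JSON has been saved to lvm_list.json (a hypothetical filename):

import json

# top level maps osd id ("0", "1", "2") to a list of LV records,
# each with "lv_path", "devices", and a "tags" dict of ceph.* keys
with open("lvm_list.json") as f:
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"devices={','.join(lv['devices'])})")

Run against the output above, this would report osd.0 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3, and likewise osd.1 and osd.2 on their vg1/vg2 volumes, matching the lvm batch run's refusal to touch them.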
Nov 25 20:39:54 compute-0 ceph-mon[75144]: pgmap v1048: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:54 compute-0 systemd[1]: libpod-6653a2017a722b11f3c926cbc198bc67b542e11725f87e756aa5a475bcf429e6.scope: Deactivated successfully.
Nov 25 20:39:54 compute-0 podman[265566]: 2025-11-25 20:39:54.750864518 +0000 UTC m=+0.031519184 container died 6653a2017a722b11f3c926cbc198bc67b542e11725f87e756aa5a475bcf429e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-cca5444d0b0deab93f52fd5120dd4409f0358522c0bf24642144612f68397010-merged.mount: Deactivated successfully.
Nov 25 20:39:54 compute-0 podman[265566]: 2025-11-25 20:39:54.805090685 +0000 UTC m=+0.085745331 container remove 6653a2017a722b11f3c926cbc198bc67b542e11725f87e756aa5a475bcf429e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:39:54 compute-0 systemd[1]: libpod-conmon-6653a2017a722b11f3c926cbc198bc67b542e11725f87e756aa5a475bcf429e6.scope: Deactivated successfully.
Nov 25 20:39:54 compute-0 sudo[265416]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:54 compute-0 sudo[265581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:39:54 compute-0 sudo[265581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:54 compute-0 sudo[265581]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:55 compute-0 sudo[265606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:39:55 compute-0 sudo[265606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:55 compute-0 sudo[265606]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:55 compute-0 sudo[265631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:39:55 compute-0 sudo[265631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:55 compute-0 sudo[265631]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:55 compute-0 sudo[265656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:39:55 compute-0 sudo[265656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:39:55 compute-0 podman[265721]: 2025-11-25 20:39:55.551037895 +0000 UTC m=+0.053743835 container create 65c27c8a6f5b94fc563b6994b8acaad2ba4bfe08401370bf6984082b900eff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:39:55 compute-0 systemd[1]: Started libpod-conmon-65c27c8a6f5b94fc563b6994b8acaad2ba4bfe08401370bf6984082b900eff22.scope.
Nov 25 20:39:55 compute-0 podman[265721]: 2025-11-25 20:39:55.524261 +0000 UTC m=+0.026967060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:39:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:39:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1049: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:55 compute-0 podman[265721]: 2025-11-25 20:39:55.643203709 +0000 UTC m=+0.145909639 container init 65c27c8a6f5b94fc563b6994b8acaad2ba4bfe08401370bf6984082b900eff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pascal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 20:39:55 compute-0 podman[265721]: 2025-11-25 20:39:55.654263398 +0000 UTC m=+0.156969328 container start 65c27c8a6f5b94fc563b6994b8acaad2ba4bfe08401370bf6984082b900eff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:39:55 compute-0 podman[265721]: 2025-11-25 20:39:55.657915836 +0000 UTC m=+0.160621776 container attach 65c27c8a6f5b94fc563b6994b8acaad2ba4bfe08401370bf6984082b900eff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pascal, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:39:55 compute-0 kind_pascal[265737]: 167 167
Nov 25 20:39:55 compute-0 systemd[1]: libpod-65c27c8a6f5b94fc563b6994b8acaad2ba4bfe08401370bf6984082b900eff22.scope: Deactivated successfully.
Nov 25 20:39:55 compute-0 podman[265721]: 2025-11-25 20:39:55.661257947 +0000 UTC m=+0.163963887 container died 65c27c8a6f5b94fc563b6994b8acaad2ba4bfe08401370bf6984082b900eff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 20:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-112cccb3b310373056618eefda31eb3f9b83537fbcb58a9b9adbaae1908a6692-merged.mount: Deactivated successfully.
Nov 25 20:39:55 compute-0 podman[265721]: 2025-11-25 20:39:55.706224213 +0000 UTC m=+0.208930113 container remove 65c27c8a6f5b94fc563b6994b8acaad2ba4bfe08401370bf6984082b900eff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 20:39:55 compute-0 systemd[1]: libpod-conmon-65c27c8a6f5b94fc563b6994b8acaad2ba4bfe08401370bf6984082b900eff22.scope: Deactivated successfully.
Nov 25 20:39:55 compute-0 podman[265761]: 2025-11-25 20:39:55.914457417 +0000 UTC m=+0.057175168 container create 7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:39:55 compute-0 systemd[1]: Started libpod-conmon-7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572.scope.
Nov 25 20:39:55 compute-0 podman[265761]: 2025-11-25 20:39:55.889093481 +0000 UTC m=+0.031811242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:39:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b93e08cf28b1e7a3f89a65b28ceb3f16889c77848f72ee1eef927697be433ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b93e08cf28b1e7a3f89a65b28ceb3f16889c77848f72ee1eef927697be433ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b93e08cf28b1e7a3f89a65b28ceb3f16889c77848f72ee1eef927697be433ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b93e08cf28b1e7a3f89a65b28ceb3f16889c77848f72ee1eef927697be433ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:39:56 compute-0 podman[265761]: 2025-11-25 20:39:56.040273661 +0000 UTC m=+0.182991422 container init 7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 25 20:39:56 compute-0 podman[265761]: 2025-11-25 20:39:56.052882932 +0000 UTC m=+0.195600683 container start 7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:39:56 compute-0 podman[265761]: 2025-11-25 20:39:56.05725104 +0000 UTC m=+0.199968851 container attach 7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:39:56 compute-0 podman[265782]: 2025-11-25 20:39:56.14045161 +0000 UTC m=+0.104429385 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Nov 25 20:39:56 compute-0 ceph-mon[75144]: pgmap v1049: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:39:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:39:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:39:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:39:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:39:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:39:57
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['volumes', 'images', '.mgr', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms']
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:39:57 compute-0 adoring_babbage[265779]: {
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "osd_id": 2,
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "type": "bluestore"
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:     },
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "osd_id": 1,
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "type": "bluestore"
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:     },
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "osd_id": 0,
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:         "type": "bluestore"
Nov 25 20:39:57 compute-0 adoring_babbage[265779]:     }
Nov 25 20:39:57 compute-0 adoring_babbage[265779]: }
Nov 25 20:39:57 compute-0 systemd[1]: libpod-7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572.scope: Deactivated successfully.
Nov 25 20:39:57 compute-0 systemd[1]: libpod-7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572.scope: Consumed 1.157s CPU time.
Nov 25 20:39:57 compute-0 podman[265832]: 2025-11-25 20:39:57.261772217 +0000 UTC m=+0.036541400 container died 7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b93e08cf28b1e7a3f89a65b28ceb3f16889c77848f72ee1eef927697be433ee-merged.mount: Deactivated successfully.
Nov 25 20:39:57 compute-0 podman[265832]: 2025-11-25 20:39:57.324237536 +0000 UTC m=+0.099006639 container remove 7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:39:57 compute-0 systemd[1]: libpod-conmon-7de8d0fb70ac4da6a497663863fc48c299de1267c66384d9ef0ddc44ac632572.scope: Deactivated successfully.
Nov 25 20:39:57 compute-0 sudo[265656]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:39:57 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:39:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:39:57 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:39:57 compute-0 sudo[265848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:39:57 compute-0 sudo[265848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:57 compute-0 sudo[265848]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:57 compute-0 sudo[265873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:39:57 compute-0 sudo[265873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:39:57 compute-0 sudo[265873]: pam_unix(sudo:session): session closed for user root
Nov 25 20:39:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1050: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:39:58 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:39:58 compute-0 ceph-mon[75144]: pgmap v1050: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:39:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1051: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:00 compute-0 ceph-mon[75144]: pgmap v1051: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1052: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:01 compute-0 ceph-mon[75144]: pgmap v1052: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:40:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:40:03 compute-0 podman[265898]: 2025-11-25 20:40:03.03050617 +0000 UTC m=+0.123101282 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:40:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1053: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:04 compute-0 ceph-mon[75144]: pgmap v1053: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1054: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:05 compute-0 ceph-mon[75144]: pgmap v1054: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1055: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:08 compute-0 ceph-mon[75144]: pgmap v1055: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1056: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:09 compute-0 ceph-mon[75144]: pgmap v1056: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1057: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:12 compute-0 ceph-mon[75144]: pgmap v1057: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1058: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:13 compute-0 ceph-mon[75144]: pgmap v1058: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1059: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:16 compute-0 ceph-mon[75144]: pgmap v1059: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:40:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2888620506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:40:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:40:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2888620506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:40:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1060: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2888620506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:40:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2888620506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:40:17 compute-0 ceph-mon[75144]: pgmap v1060: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1061: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:20 compute-0 ceph-mon[75144]: pgmap v1061: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1062: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:21 compute-0 ceph-mon[75144]: pgmap v1062: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1063: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:23 compute-0 podman[265924]: 2025-11-25 20:40:23.992738618 +0000 UTC m=+0.081506326 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 20:40:24 compute-0 ceph-mon[75144]: pgmap v1063: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1064: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:25 compute-0 ceph-mon[75144]: pgmap v1064: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:40:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:40:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:40:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:40:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:40:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:40:26 compute-0 podman[265945]: 2025-11-25 20:40:26.99022935 +0000 UTC m=+0.081851476 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:40:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1065: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:28 compute-0 ceph-mon[75144]: pgmap v1065: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1066: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:29 compute-0 ceph-mon[75144]: pgmap v1066: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1067: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:32 compute-0 ceph-mon[75144]: pgmap v1067: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1068: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:33 compute-0 ceph-mon[75144]: pgmap v1068: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:34 compute-0 podman[265968]: 2025-11-25 20:40:34.027005698 +0000 UTC m=+0.122883316 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:40:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1069: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:35 compute-0 ceph-mon[75144]: pgmap v1069: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:37 compute-0 nova_compute[248866]: 2025-11-25 20:40:37.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:37 compute-0 nova_compute[248866]: 2025-11-25 20:40:37.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 20:40:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1070: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:37 compute-0 ceph-mon[75144]: pgmap v1070: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:39 compute-0 nova_compute[248866]: 2025-11-25 20:40:39.055 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:39 compute-0 nova_compute[248866]: 2025-11-25 20:40:39.055 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 20:40:39 compute-0 nova_compute[248866]: 2025-11-25 20:40:39.528 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 20:40:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1071: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:39 compute-0 ceph-mon[75144]: pgmap v1071: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:41 compute-0 nova_compute[248866]: 2025-11-25 20:40:41.512 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1072: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:41 compute-0 ceph-mon[75144]: pgmap v1072: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:42 compute-0 nova_compute[248866]: 2025-11-25 20:40:42.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1073: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:43 compute-0 ceph-mon[75144]: pgmap v1073: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:45 compute-0 nova_compute[248866]: 2025-11-25 20:40:45.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:45 compute-0 nova_compute[248866]: 2025-11-25 20:40:45.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:45 compute-0 nova_compute[248866]: 2025-11-25 20:40:45.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:40:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1074: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:45 compute-0 ceph-mon[75144]: pgmap v1074: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:46 compute-0 nova_compute[248866]: 2025-11-25 20:40:46.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.078 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.079 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.079 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.080 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.080 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:40:47 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:40:47 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879636139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.496 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:40:47 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2879636139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:40:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1075: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.678 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.680 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5311MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.680 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.680 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.754 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.754 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:40:47 compute-0 nova_compute[248866]: 2025-11-25 20:40:47.779 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:40:48 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:40:48 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3929545535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:40:48 compute-0 nova_compute[248866]: 2025-11-25 20:40:48.178 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:40:48 compute-0 nova_compute[248866]: 2025-11-25 20:40:48.187 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:40:48 compute-0 nova_compute[248866]: 2025-11-25 20:40:48.208 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:40:48 compute-0 nova_compute[248866]: 2025-11-25 20:40:48.211 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:40:48 compute-0 nova_compute[248866]: 2025-11-25 20:40:48.212 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:40:48 compute-0 ceph-mon[75144]: pgmap v1075: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:48 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3929545535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:40:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:40:48.956 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:40:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:40:48.957 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:40:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:40:48.958 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:40:49 compute-0 nova_compute[248866]: 2025-11-25 20:40:49.215 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:49 compute-0 nova_compute[248866]: 2025-11-25 20:40:49.216 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1076: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:49 compute-0 ceph-mon[75144]: pgmap v1076: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:50 compute-0 nova_compute[248866]: 2025-11-25 20:40:50.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:50 compute-0 nova_compute[248866]: 2025-11-25 20:40:50.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:40:50 compute-0 nova_compute[248866]: 2025-11-25 20:40:50.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:40:50 compute-0 nova_compute[248866]: 2025-11-25 20:40:50.063 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:40:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1077: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:51 compute-0 ceph-mon[75144]: pgmap v1077: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1078: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:53 compute-0 ceph-mon[75144]: pgmap v1078: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:54 compute-0 nova_compute[248866]: 2025-11-25 20:40:54.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:40:54 compute-0 podman[266039]: 2025-11-25 20:40:54.969918654 +0000 UTC m=+0.067443127 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 25 20:40:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:40:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1079: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:55 compute-0 ceph-mon[75144]: pgmap v1079: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:40:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:40:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:40:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:40:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:40:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:40:57
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'backups', 'vms', 'images']
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:40:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1080: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:57 compute-0 sudo[266058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:40:57 compute-0 sudo[266058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:40:57 compute-0 sudo[266058]: pam_unix(sudo:session): session closed for user root
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.718573) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103257718674, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2300, "num_deletes": 501, "total_data_size": 2318851, "memory_usage": 2367456, "flush_reason": "Manual Compaction"}
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103257731646, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1621008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20093, "largest_seqno": 22392, "table_properties": {"data_size": 1612991, "index_size": 4131, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 21896, "raw_average_key_size": 19, "raw_value_size": 1593961, "raw_average_value_size": 1445, "num_data_blocks": 189, "num_entries": 1103, "num_filter_entries": 1103, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764103031, "oldest_key_time": 1764103031, "file_creation_time": 1764103257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 13097 microseconds, and 8037 cpu microseconds.
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.731691) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1621008 bytes OK
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.731713) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.733580) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.733596) EVENT_LOG_v1 {"time_micros": 1764103257733591, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.733614) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2308274, prev total WAL file size 2308274, number of live WAL files 2.
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.734603) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1583KB)], [50(5376KB)]
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103257734731, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 7126152, "oldest_snapshot_seqno": -1}
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 3901 keys, 4523134 bytes, temperature: kUnknown
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103257771247, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 4523134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4498644, "index_size": 13629, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 94093, "raw_average_key_size": 24, "raw_value_size": 4430090, "raw_average_value_size": 1135, "num_data_blocks": 580, "num_entries": 3901, "num_filter_entries": 3901, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764103257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:40:57 compute-0 ceph-mon[75144]: pgmap v1080: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.771585) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 4523134 bytes
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.773657) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.5 rd, 123.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 5.3 +0.0 blob) out(4.3 +0.0 blob), read-write-amplify(7.2) write-amplify(2.8) OK, records in: 4840, records dropped: 939 output_compression: NoCompression
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.773708) EVENT_LOG_v1 {"time_micros": 1764103257773688, "job": 26, "event": "compaction_finished", "compaction_time_micros": 36630, "compaction_time_cpu_micros": 21564, "output_level": 6, "num_output_files": 1, "total_output_size": 4523134, "num_input_records": 4840, "num_output_records": 3901, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103257774315, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103257775973, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.734464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.776024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.776029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.776031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.776033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:40:57 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:40:57.776036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:40:57 compute-0 sudo[266084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:40:57 compute-0 sudo[266084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:40:57 compute-0 sudo[266084]: pam_unix(sudo:session): session closed for user root
Nov 25 20:40:57 compute-0 podman[266082]: 2025-11-25 20:40:57.81750833 +0000 UTC m=+0.105035413 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:40:57 compute-0 sudo[266129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:40:57 compute-0 sudo[266129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:40:57 compute-0 sudo[266129]: pam_unix(sudo:session): session closed for user root
Nov 25 20:40:57 compute-0 sudo[266154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:40:57 compute-0 sudo[266154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:40:58 compute-0 podman[266252]: 2025-11-25 20:40:58.670544628 +0000 UTC m=+0.099277057 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 20:40:58 compute-0 podman[266252]: 2025-11-25 20:40:58.788341615 +0000 UTC m=+0.217073994 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:40:59 compute-0 sudo[266154]: pam_unix(sudo:session): session closed for user root
Nov 25 20:40:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:40:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:40:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:40:59 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:40:59 compute-0 sudo[266375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:40:59 compute-0 sudo[266375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:40:59 compute-0 sudo[266375]: pam_unix(sudo:session): session closed for user root
Nov 25 20:40:59 compute-0 sudo[266400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:40:59 compute-0 sudo[266400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:40:59 compute-0 sudo[266400]: pam_unix(sudo:session): session closed for user root
Nov 25 20:40:59 compute-0 sudo[266425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:40:59 compute-0 sudo[266425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:40:59 compute-0 sudo[266425]: pam_unix(sudo:session): session closed for user root
Nov 25 20:40:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1081: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:40:59 compute-0 sudo[266450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:40:59 compute-0 sudo[266450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:00 compute-0 sudo[266450]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:41:00 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:41:00 compute-0 ceph-mon[75144]: pgmap v1081: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:41:00 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:41:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:41:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:41:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:41:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:41:00 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d93aa4c2-3641-4edd-9cdf-0543ad537f56 does not exist
Nov 25 20:41:00 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 0e4888eb-04f7-40c1-b8f1-c8ca4d96e939 does not exist
Nov 25 20:41:00 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d2d0a252-326f-4c20-85d6-d21fdf6aebe2 does not exist
Nov 25 20:41:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:41:00 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:41:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:41:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:41:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:41:00 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:41:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:00 compute-0 sudo[266507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:41:00 compute-0 sudo[266507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:00 compute-0 sudo[266507]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:00 compute-0 sudo[266532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:41:00 compute-0 sudo[266532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:00 compute-0 sudo[266532]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:00 compute-0 sudo[266557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:41:00 compute-0 sudo[266557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:00 compute-0 sudo[266557]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:00 compute-0 sudo[266582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:41:00 compute-0 sudo[266582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:01 compute-0 podman[266648]: 2025-11-25 20:41:01.194628823 +0000 UTC m=+0.071538857 container create 6c1a2c5ba0592f28fa4dad04386b857e71b02a255d909c30b4aa3c0d30ac4b57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bouman, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:41:01 compute-0 systemd[1]: Started libpod-conmon-6c1a2c5ba0592f28fa4dad04386b857e71b02a255d909c30b4aa3c0d30ac4b57.scope.
Nov 25 20:41:01 compute-0 podman[266648]: 2025-11-25 20:41:01.163484029 +0000 UTC m=+0.040394073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:41:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:41:01 compute-0 podman[266648]: 2025-11-25 20:41:01.307703812 +0000 UTC m=+0.184613896 container init 6c1a2c5ba0592f28fa4dad04386b857e71b02a255d909c30b4aa3c0d30ac4b57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 20:41:01 compute-0 podman[266648]: 2025-11-25 20:41:01.319458709 +0000 UTC m=+0.196368754 container start 6c1a2c5ba0592f28fa4dad04386b857e71b02a255d909c30b4aa3c0d30ac4b57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:41:01 compute-0 podman[266648]: 2025-11-25 20:41:01.323868639 +0000 UTC m=+0.200778693 container attach 6c1a2c5ba0592f28fa4dad04386b857e71b02a255d909c30b4aa3c0d30ac4b57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bouman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:41:01 compute-0 heuristic_bouman[266665]: 167 167
Nov 25 20:41:01 compute-0 systemd[1]: libpod-6c1a2c5ba0592f28fa4dad04386b857e71b02a255d909c30b4aa3c0d30ac4b57.scope: Deactivated successfully.
Nov 25 20:41:01 compute-0 podman[266648]: 2025-11-25 20:41:01.3283619 +0000 UTC m=+0.205271904 container died 6c1a2c5ba0592f28fa4dad04386b857e71b02a255d909c30b4aa3c0d30ac4b57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bouman, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:41:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e85126622550d0c7f7776362821149279fcd382b9f6ce0be614ad3924722eb5-merged.mount: Deactivated successfully.
Nov 25 20:41:01 compute-0 podman[266648]: 2025-11-25 20:41:01.381017965 +0000 UTC m=+0.257928009 container remove 6c1a2c5ba0592f28fa4dad04386b857e71b02a255d909c30b4aa3c0d30ac4b57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:41:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:41:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:41:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:41:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:41:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:41:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:41:01 compute-0 systemd[1]: libpod-conmon-6c1a2c5ba0592f28fa4dad04386b857e71b02a255d909c30b4aa3c0d30ac4b57.scope: Deactivated successfully.
Nov 25 20:41:01 compute-0 podman[266689]: 2025-11-25 20:41:01.611013977 +0000 UTC m=+0.066558862 container create 8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:41:01 compute-0 systemd[1]: Started libpod-conmon-8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b.scope.
Nov 25 20:41:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1082: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:01 compute-0 podman[266689]: 2025-11-25 20:41:01.584474289 +0000 UTC m=+0.040019234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:41:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9712c81e1c4871545811bfd6c2944d2d7f4a0b0acf25f82f0f4fcd34dff55d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9712c81e1c4871545811bfd6c2944d2d7f4a0b0acf25f82f0f4fcd34dff55d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9712c81e1c4871545811bfd6c2944d2d7f4a0b0acf25f82f0f4fcd34dff55d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9712c81e1c4871545811bfd6c2944d2d7f4a0b0acf25f82f0f4fcd34dff55d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9712c81e1c4871545811bfd6c2944d2d7f4a0b0acf25f82f0f4fcd34dff55d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:01 compute-0 podman[266689]: 2025-11-25 20:41:01.707561189 +0000 UTC m=+0.163106134 container init 8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 20:41:01 compute-0 podman[266689]: 2025-11-25 20:41:01.722188305 +0000 UTC m=+0.177733190 container start 8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rhodes, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:41:01 compute-0 podman[266689]: 2025-11-25 20:41:01.726654445 +0000 UTC m=+0.182199330 container attach 8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rhodes, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:41:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:41:02 compute-0 ceph-mon[75144]: pgmap v1082: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:02 compute-0 happy_rhodes[266705]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:41:02 compute-0 happy_rhodes[266705]: --> relative data size: 1.0
Nov 25 20:41:02 compute-0 happy_rhodes[266705]: --> All data devices are unavailable
Nov 25 20:41:02 compute-0 systemd[1]: libpod-8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b.scope: Deactivated successfully.
Nov 25 20:41:02 compute-0 systemd[1]: libpod-8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b.scope: Consumed 1.082s CPU time.
Nov 25 20:41:02 compute-0 podman[266734]: 2025-11-25 20:41:02.903476153 +0000 UTC m=+0.042233814 container died 8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:41:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e9712c81e1c4871545811bfd6c2944d2d7f4a0b0acf25f82f0f4fcd34dff55d-merged.mount: Deactivated successfully.
Nov 25 20:41:02 compute-0 podman[266734]: 2025-11-25 20:41:02.963695542 +0000 UTC m=+0.102453193 container remove 8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rhodes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:41:02 compute-0 systemd[1]: libpod-conmon-8bc90d82e9a7579959804bdd0a0d3e8321fad5fd89f2701bc9450b5f0b52956b.scope: Deactivated successfully.
Nov 25 20:41:03 compute-0 sudo[266582]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:03 compute-0 sudo[266749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:41:03 compute-0 sudo[266749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:03 compute-0 sudo[266749]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:03 compute-0 sudo[266774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:41:03 compute-0 sudo[266774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:03 compute-0 sudo[266774]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:03 compute-0 sudo[266799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:41:03 compute-0 sudo[266799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:03 compute-0 sudo[266799]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:03 compute-0 sudo[266824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:41:03 compute-0 sudo[266824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1083: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:03 compute-0 ceph-mon[75144]: pgmap v1083: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:03 compute-0 podman[266889]: 2025-11-25 20:41:03.865728055 +0000 UTC m=+0.062773629 container create 9228de2f5afc247739ff9a9b0aafaf98742d7b0168e22ed90f1d0fa90b02533f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:41:03 compute-0 systemd[1]: Started libpod-conmon-9228de2f5afc247739ff9a9b0aafaf98742d7b0168e22ed90f1d0fa90b02533f.scope.
Nov 25 20:41:03 compute-0 podman[266889]: 2025-11-25 20:41:03.830273835 +0000 UTC m=+0.027319479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:41:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:41:03 compute-0 podman[266889]: 2025-11-25 20:41:03.964376673 +0000 UTC m=+0.161422287 container init 9228de2f5afc247739ff9a9b0aafaf98742d7b0168e22ed90f1d0fa90b02533f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:41:03 compute-0 podman[266889]: 2025-11-25 20:41:03.975271948 +0000 UTC m=+0.172317522 container start 9228de2f5afc247739ff9a9b0aafaf98742d7b0168e22ed90f1d0fa90b02533f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:41:03 compute-0 podman[266889]: 2025-11-25 20:41:03.979592665 +0000 UTC m=+0.176638339 container attach 9228de2f5afc247739ff9a9b0aafaf98742d7b0168e22ed90f1d0fa90b02533f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:41:03 compute-0 nostalgic_moore[266905]: 167 167
Nov 25 20:41:03 compute-0 systemd[1]: libpod-9228de2f5afc247739ff9a9b0aafaf98742d7b0168e22ed90f1d0fa90b02533f.scope: Deactivated successfully.
Nov 25 20:41:03 compute-0 podman[266889]: 2025-11-25 20:41:03.984113537 +0000 UTC m=+0.181159111 container died 9228de2f5afc247739ff9a9b0aafaf98742d7b0168e22ed90f1d0fa90b02533f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 20:41:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-30eb122fc2749623f1a39e50ee9d54392f3dce2404d5ddbd665534c5d207aab3-merged.mount: Deactivated successfully.
Nov 25 20:41:04 compute-0 podman[266889]: 2025-11-25 20:41:04.0341296 +0000 UTC m=+0.231175174 container remove 9228de2f5afc247739ff9a9b0aafaf98742d7b0168e22ed90f1d0fa90b02533f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:41:04 compute-0 systemd[1]: libpod-conmon-9228de2f5afc247739ff9a9b0aafaf98742d7b0168e22ed90f1d0fa90b02533f.scope: Deactivated successfully.
Nov 25 20:41:04 compute-0 podman[266923]: 2025-11-25 20:41:04.208417105 +0000 UTC m=+0.119250796 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 20:41:04 compute-0 podman[266954]: 2025-11-25 20:41:04.290720442 +0000 UTC m=+0.073827879 container create 5b503e826db32b8ca6d4cd206784bbb0fad092d4f3261770e91b521f272317c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kapitsa, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:41:04 compute-0 podman[266954]: 2025-11-25 20:41:04.261191983 +0000 UTC m=+0.044299460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:41:04 compute-0 systemd[1]: Started libpod-conmon-5b503e826db32b8ca6d4cd206784bbb0fad092d4f3261770e91b521f272317c2.scope.
Nov 25 20:41:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33872bbe22dac8d3e7c4ec9ea019daffcc960063edf8d295a845aae7783e6b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33872bbe22dac8d3e7c4ec9ea019daffcc960063edf8d295a845aae7783e6b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33872bbe22dac8d3e7c4ec9ea019daffcc960063edf8d295a845aae7783e6b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33872bbe22dac8d3e7c4ec9ea019daffcc960063edf8d295a845aae7783e6b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:04 compute-0 podman[266954]: 2025-11-25 20:41:04.416729421 +0000 UTC m=+0.199836848 container init 5b503e826db32b8ca6d4cd206784bbb0fad092d4f3261770e91b521f272317c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kapitsa, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:41:04 compute-0 podman[266954]: 2025-11-25 20:41:04.433593347 +0000 UTC m=+0.216700774 container start 5b503e826db32b8ca6d4cd206784bbb0fad092d4f3261770e91b521f272317c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:41:04 compute-0 podman[266954]: 2025-11-25 20:41:04.437743829 +0000 UTC m=+0.220851266 container attach 5b503e826db32b8ca6d4cd206784bbb0fad092d4f3261770e91b521f272317c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]: {
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:     "0": [
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:         {
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "devices": [
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "/dev/loop3"
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             ],
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_name": "ceph_lv0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_size": "21470642176",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "name": "ceph_lv0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "tags": {
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.cluster_name": "ceph",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.crush_device_class": "",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.encrypted": "0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.osd_id": "0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.type": "block",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.vdo": "0"
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             },
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "type": "block",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "vg_name": "ceph_vg0"
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:         }
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:     ],
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:     "1": [
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:         {
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "devices": [
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "/dev/loop4"
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             ],
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_name": "ceph_lv1",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_size": "21470642176",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "name": "ceph_lv1",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "tags": {
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.cluster_name": "ceph",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.crush_device_class": "",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.encrypted": "0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.osd_id": "1",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.type": "block",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.vdo": "0"
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             },
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "type": "block",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "vg_name": "ceph_vg1"
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:         }
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:     ],
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:     "2": [
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:         {
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "devices": [
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "/dev/loop5"
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             ],
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_name": "ceph_lv2",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_size": "21470642176",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "name": "ceph_lv2",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "tags": {
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.cluster_name": "ceph",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.crush_device_class": "",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.encrypted": "0",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.osd_id": "2",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.type": "block",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:                 "ceph.vdo": "0"
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             },
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "type": "block",
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:             "vg_name": "ceph_vg2"
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:         }
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]:     ]
Nov 25 20:41:05 compute-0 clever_kapitsa[266971]: }
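
The JSON block printed by clever_kapitsa has the shape of ceph-volume lvm list --format json output: top-level keys are OSD ids, each mapping to a list of logical volumes carrying ceph.* tags. A minimal sketch, assuming the block has been saved to a file named lvm_list.json (the file name is an assumption), that rebuilds the osd_id -> LV mapping:

    import json

    # Key names (lv_path, tags, ceph.osd_fsid) are taken verbatim from the
    # JSON above; lvm_list.json is an assumed capture of that block.
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])

    # Expected output for the data above:
    #   0 /dev/ceph_vg0/ceph_lv0 f0a2211a-2b5d-4914-9a66-9743102e8fa4
    #   1 /dev/ceph_vg1/ceph_lv1 7e844079-8f15-40a1-8d48-4a531b96b291
    #   2 /dev/ceph_vg2/ceph_lv2 21cf5470-2713-4831-8402-4fccd506c64e
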
Nov 25 20:41:05 compute-0 systemd[1]: libpod-5b503e826db32b8ca6d4cd206784bbb0fad092d4f3261770e91b521f272317c2.scope: Deactivated successfully.
Nov 25 20:41:05 compute-0 podman[266954]: 2025-11-25 20:41:05.24098767 +0000 UTC m=+1.024095097 container died 5b503e826db32b8ca6d4cd206784bbb0fad092d4f3261770e91b521f272317c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kapitsa, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 20:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a33872bbe22dac8d3e7c4ec9ea019daffcc960063edf8d295a845aae7783e6b3-merged.mount: Deactivated successfully.
Nov 25 20:41:05 compute-0 podman[266954]: 2025-11-25 20:41:05.318874237 +0000 UTC m=+1.101981644 container remove 5b503e826db32b8ca6d4cd206784bbb0fad092d4f3261770e91b521f272317c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kapitsa, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:41:05 compute-0 systemd[1]: libpod-conmon-5b503e826db32b8ca6d4cd206784bbb0fad092d4f3261770e91b521f272317c2.scope: Deactivated successfully.
Nov 25 20:41:05 compute-0 sudo[266824]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:05 compute-0 sudo[266994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:41:05 compute-0 sudo[266994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:05 compute-0 sudo[266994]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:05 compute-0 sudo[267019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:41:05 compute-0 sudo[267019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:05 compute-0 sudo[267019]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:05 compute-0 sudo[267044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:41:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1084: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:05 compute-0 sudo[267044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:05 compute-0 sudo[267044]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:05 compute-0 sudo[267069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:41:05 compute-0 sudo[267069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:05 compute-0 ceph-mon[75144]: pgmap v1084: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
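
The pgmap capacity figure is consistent with the three logical volumes listed above: each lv_size of 21470642176 bytes is just under 20 GiB, and three of them account for the reported 60 GiB total. The arithmetic, as a sketch:

    # lv_size from the lvm listing above, vs. "60 GiB / 60 GiB avail" in pgmap.
    lv_size = 21_470_642_176            # bytes per OSD LV
    per_osd_gib = lv_size / 2**30       # ~19.996 GiB, i.e. roughly 20 GiB
    total_gib = 3 * per_osd_gib         # ~59.99 GiB, shown rounded as 60 GiB
    print(round(per_osd_gib, 2), round(total_gib, 2))
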
Nov 25 20:41:06 compute-0 podman[267133]: 2025-11-25 20:41:06.228526716 +0000 UTC m=+0.065420961 container create 5353050c2ded285c370203d2345a09fc30f3edb7bb3edb646164df27f236889c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:41:06 compute-0 systemd[1]: Started libpod-conmon-5353050c2ded285c370203d2345a09fc30f3edb7bb3edb646164df27f236889c.scope.
Nov 25 20:41:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:41:06 compute-0 podman[267133]: 2025-11-25 20:41:06.202473391 +0000 UTC m=+0.039367716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:41:06 compute-0 podman[267133]: 2025-11-25 20:41:06.39651205 +0000 UTC m=+0.233406335 container init 5353050c2ded285c370203d2345a09fc30f3edb7bb3edb646164df27f236889c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 20:41:06 compute-0 podman[267133]: 2025-11-25 20:41:06.404587228 +0000 UTC m=+0.241481473 container start 5353050c2ded285c370203d2345a09fc30f3edb7bb3edb646164df27f236889c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meitner, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:41:06 compute-0 quizzical_meitner[267149]: 167 167
Nov 25 20:41:06 compute-0 systemd[1]: libpod-5353050c2ded285c370203d2345a09fc30f3edb7bb3edb646164df27f236889c.scope: Deactivated successfully.
Nov 25 20:41:06 compute-0 podman[267133]: 2025-11-25 20:41:06.434102297 +0000 UTC m=+0.270996622 container attach 5353050c2ded285c370203d2345a09fc30f3edb7bb3edb646164df27f236889c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meitner, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:41:06 compute-0 podman[267133]: 2025-11-25 20:41:06.435439833 +0000 UTC m=+0.272334078 container died 5353050c2ded285c370203d2345a09fc30f3edb7bb3edb646164df27f236889c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meitner, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 20:41:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-62a93cc147e07007a0f10c6a26f69234504fe159e68fde960133aa966bf2a7b4-merged.mount: Deactivated successfully.
Nov 25 20:41:06 compute-0 podman[267133]: 2025-11-25 20:41:06.734633488 +0000 UTC m=+0.571527773 container remove 5353050c2ded285c370203d2345a09fc30f3edb7bb3edb646164df27f236889c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:41:06 compute-0 systemd[1]: libpod-conmon-5353050c2ded285c370203d2345a09fc30f3edb7bb3edb646164df27f236889c.scope: Deactivated successfully.
Nov 25 20:41:07 compute-0 podman[267173]: 2025-11-25 20:41:07.01969106 +0000 UTC m=+0.086239194 container create 53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:41:07 compute-0 systemd[1]: Started libpod-conmon-53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19.scope.
Nov 25 20:41:07 compute-0 podman[267173]: 2025-11-25 20:41:06.984054446 +0000 UTC m=+0.050602660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:41:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:41:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c305e0600a07c2f7839a2d35e8b5d888bbc3f15e8078544d4b3a48516ce9ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c305e0600a07c2f7839a2d35e8b5d888bbc3f15e8078544d4b3a48516ce9ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c305e0600a07c2f7839a2d35e8b5d888bbc3f15e8078544d4b3a48516ce9ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c305e0600a07c2f7839a2d35e8b5d888bbc3f15e8078544d4b3a48516ce9ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:41:07 compute-0 podman[267173]: 2025-11-25 20:41:07.119192261 +0000 UTC m=+0.185740365 container init 53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:41:07 compute-0 podman[267173]: 2025-11-25 20:41:07.134613589 +0000 UTC m=+0.201161693 container start 53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:41:07 compute-0 podman[267173]: 2025-11-25 20:41:07.137905098 +0000 UTC m=+0.204453202 container attach 53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:41:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1085: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:07 compute-0 ceph-mon[75144]: pgmap v1085: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:08 compute-0 friendly_bohr[267189]: {
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "osd_id": 2,
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "type": "bluestore"
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:     },
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "osd_id": 1,
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "type": "bluestore"
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:     },
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "osd_id": 0,
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:         "type": "bluestore"
Nov 25 20:41:08 compute-0 friendly_bohr[267189]:     }
Nov 25 20:41:08 compute-0 friendly_bohr[267189]: }
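
This block is the result of the ceph-volume raw list --format json run that cephadm launched via sudo at 20:41:05; here the top-level keys are OSD uuids rather than ids. A minimal sketch joining it back to the earlier LVM listing by osd_fsid, again assuming both blocks were captured to files (raw_list.json and lvm_list.json are assumed names):

    import json

    # raw_list.json: the block above, keyed by OSD uuid.
    # lvm_list.json: the earlier ceph-volume lvm listing, keyed by OSD id.
    with open("raw_list.json") as fh:
        raw = json.load(fh)
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    lv_by_fsid = {
        lv["tags"]["ceph.osd_fsid"]: lv["lv_path"]
        for entries in lvm.values()
        for lv in entries
    }
    for uuid, osd in raw.items():
        # e.g.: 2 bluestore /dev/mapper/ceph_vg2-ceph_lv2 /dev/ceph_vg2/ceph_lv2
        print(osd["osd_id"], osd["type"], osd["device"], lv_by_fsid[uuid])
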
Nov 25 20:41:08 compute-0 systemd[1]: libpod-53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19.scope: Deactivated successfully.
Nov 25 20:41:08 compute-0 systemd[1]: libpod-53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19.scope: Consumed 1.138s CPU time.
Nov 25 20:41:08 compute-0 podman[267222]: 2025-11-25 20:41:08.335025994 +0000 UTC m=+0.045119962 container died 53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 20:41:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-39c305e0600a07c2f7839a2d35e8b5d888bbc3f15e8078544d4b3a48516ce9ef-merged.mount: Deactivated successfully.
Nov 25 20:41:08 compute-0 podman[267222]: 2025-11-25 20:41:08.40990297 +0000 UTC m=+0.119996938 container remove 53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 20:41:08 compute-0 systemd[1]: libpod-conmon-53681905d132c78094d0b381c413c46785f85a7afd2a0b007238042d0cc49f19.scope: Deactivated successfully.
Nov 25 20:41:08 compute-0 sudo[267069]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:41:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:41:08 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:41:08 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:41:08 compute-0 sudo[267235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:41:08 compute-0 sudo[267235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:08 compute-0 sudo[267235]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:08 compute-0 sudo[267260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:41:08 compute-0 sudo[267260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:41:08 compute-0 sudo[267260]: pam_unix(sudo:session): session closed for user root
Nov 25 20:41:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:41:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:41:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1086: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:10 compute-0 ceph-mon[75144]: pgmap v1086: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1087: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:11 compute-0 ceph-mon[75144]: pgmap v1087: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1088: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:13 compute-0 ceph-mon[75144]: pgmap v1088: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1089: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:15 compute-0 ceph-mon[75144]: pgmap v1089: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:41:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1904263842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:41:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:41:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1904263842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:41:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1904263842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:41:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1904263842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:41:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1090: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:18 compute-0 ceph-mon[75144]: pgmap v1090: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1091: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:19 compute-0 ceph-mon[75144]: pgmap v1091: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1092: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:21 compute-0 ceph-mon[75144]: pgmap v1092: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1093: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:23 compute-0 ceph-mon[75144]: pgmap v1093: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1094: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:25 compute-0 ceph-mon[75144]: pgmap v1094: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:26 compute-0 podman[267286]: 2025-11-25 20:41:26.001989014 +0000 UTC m=+0.091499827 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 20:41:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:41:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:41:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:41:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:41:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:41:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:41:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1095: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:27 compute-0 ceph-mon[75144]: pgmap v1095: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:27 compute-0 podman[267306]: 2025-11-25 20:41:27.988222049 +0000 UTC m=+0.079489561 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:41:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1096: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:29 compute-0 ceph-mon[75144]: pgmap v1096: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1097: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:31 compute-0 ceph-mon[75144]: pgmap v1097: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1098: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:33 compute-0 ceph-mon[75144]: pgmap v1098: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:35 compute-0 podman[267326]: 2025-11-25 20:41:35.034184815 +0000 UTC m=+0.129732340 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 20:41:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1099: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:35 compute-0 ceph-mon[75144]: pgmap v1099: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:37 compute-0 ceph-mon[75144]: pgmap v1100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:39 compute-0 ceph-mon[75144]: pgmap v1101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.483914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103300483947, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 627, "num_deletes": 256, "total_data_size": 476436, "memory_usage": 488152, "flush_reason": "Manual Compaction"}
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103300489591, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 461557, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22393, "largest_seqno": 23019, "table_properties": {"data_size": 458248, "index_size": 1217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7300, "raw_average_key_size": 18, "raw_value_size": 451574, "raw_average_value_size": 1117, "num_data_blocks": 54, "num_entries": 404, "num_filter_entries": 404, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764103257, "oldest_key_time": 1764103257, "file_creation_time": 1764103300, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 5710 microseconds, and 2160 cpu microseconds.
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.489624) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 461557 bytes OK
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.489642) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.491516) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.491533) EVENT_LOG_v1 {"time_micros": 1764103300491527, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.491552) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 473039, prev total WAL file size 473039, number of live WAL files 2.
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.492003) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(450KB)], [53(4417KB)]
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103300492039, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 4984691, "oldest_snapshot_seqno": -1}
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 3782 keys, 4890911 bytes, temperature: kUnknown
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103300523956, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 4890911, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4865918, "index_size": 14452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92785, "raw_average_key_size": 24, "raw_value_size": 4798199, "raw_average_value_size": 1268, "num_data_blocks": 613, "num_entries": 3782, "num_filter_entries": 3782, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764103300, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.524222) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 4890911 bytes
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.525873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.8 rd, 152.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 4.3 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(21.4) write-amplify(10.6) OK, records in: 4305, records dropped: 523 output_compression: NoCompression
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.525898) EVENT_LOG_v1 {"time_micros": 1764103300525885, "job": 28, "event": "compaction_finished", "compaction_time_micros": 31997, "compaction_time_cpu_micros": 22176, "output_level": 6, "num_output_files": 1, "total_output_size": 4890911, "num_input_records": 4305, "num_output_records": 3782, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103300526164, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103300527401, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.491935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.527449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.527456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.527458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.527460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:41:40 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:41:40.527462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:41:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:41 compute-0 ceph-mon[75144]: pgmap v1102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:42 compute-0 nova_compute[248866]: 2025-11-25 20:41:42.050 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:43 compute-0 nova_compute[248866]: 2025-11-25 20:41:43.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:43 compute-0 ceph-mon[75144]: pgmap v1103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:45 compute-0 nova_compute[248866]: 2025-11-25 20:41:45.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:45 compute-0 ceph-mon[75144]: pgmap v1104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:46 compute-0 nova_compute[248866]: 2025-11-25 20:41:46.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.079 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.079 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.080 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.080 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.080 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:41:47 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:41:47 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2643839382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.535 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:41:47 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2643839382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:41:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.758 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.759 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5302MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.760 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:41:47 compute-0 nova_compute[248866]: 2025-11-25 20:41:47.760 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.043 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.043 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.168 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing inventories for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.259 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating ProviderTree inventory for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.259 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating inventory in ProviderTree for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.278 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing aggregate associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.322 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing trait associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, traits: HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.342 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:41:48 compute-0 ceph-mon[75144]: pgmap v1105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:48 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:41:48 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951435909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.804 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.811 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.834 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.836 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:41:48 compute-0 nova_compute[248866]: 2025-11-25 20:41:48.837 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:41:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:41:48.957 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:41:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:41:48.958 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:41:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:41:48.958 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:41:49 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2951435909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:41:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:49 compute-0 nova_compute[248866]: 2025-11-25 20:41:49.837 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:50 compute-0 nova_compute[248866]: 2025-11-25 20:41:50.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:50 compute-0 ceph-mon[75144]: pgmap v1106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:51 compute-0 nova_compute[248866]: 2025-11-25 20:41:51.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:51 compute-0 nova_compute[248866]: 2025-11-25 20:41:51.055 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:41:51 compute-0 nova_compute[248866]: 2025-11-25 20:41:51.055 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:41:51 compute-0 nova_compute[248866]: 2025-11-25 20:41:51.055 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:41:51 compute-0 nova_compute[248866]: 2025-11-25 20:41:51.065 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:41:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:51 compute-0 ceph-mon[75144]: pgmap v1107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:53 compute-0 ceph-mon[75144]: pgmap v1108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:41:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:55 compute-0 ceph-mon[75144]: pgmap v1109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:41:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:41:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:41:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:41:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:41:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:41:56 compute-0 podman[267396]: 2025-11-25 20:41:56.988374849 +0000 UTC m=+0.085937006 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:41:57
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'images']
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:41:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:57 compute-0 ceph-mon[75144]: pgmap v1110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:58 compute-0 podman[267415]: 2025-11-25 20:41:58.990916886 +0000 UTC m=+0.084451256 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 20:41:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:41:59 compute-0 ceph-mon[75144]: pgmap v1111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:01 compute-0 ceph-mon[75144]: pgmap v1112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:02 compute-0 anacron[51839]: Job `cron.weekly' started
Nov 25 20:42:02 compute-0 anacron[51839]: Job `cron.weekly' terminated
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:42:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:42:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:03 compute-0 ceph-mon[75144]: pgmap v1113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1114: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:05 compute-0 ceph-mon[75144]: pgmap v1114: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:06 compute-0 podman[267437]: 2025-11-25 20:42:06.052296361 +0000 UTC m=+0.140677098 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 20:42:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1115: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:07 compute-0 ceph-mon[75144]: pgmap v1115: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:08 compute-0 sudo[267461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:42:08 compute-0 sudo[267461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:08 compute-0 sudo[267461]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:08 compute-0 sudo[267486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:42:08 compute-0 sudo[267486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:08 compute-0 sudo[267486]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:09 compute-0 sudo[267511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:42:09 compute-0 sudo[267511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:09 compute-0 sudo[267511]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:09 compute-0 sudo[267536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:42:09 compute-0 sudo[267536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1116: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:09 compute-0 sudo[267536]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:09 compute-0 ceph-mon[75144]: pgmap v1116: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:42:09 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:42:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:42:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:42:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:42:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:42:09 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev fe097eda-f333-48b3-bfa0-63a571989cfb does not exist
Nov 25 20:42:09 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev e86f0868-c4ef-4f38-bb02-8d0235b28ca5 does not exist
Nov 25 20:42:09 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 14d93fbb-4415-47b6-b17e-5ba4d31a22b0 does not exist
Nov 25 20:42:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:42:09 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:42:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:42:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:42:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:42:09 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:42:09 compute-0 sudo[267593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:42:09 compute-0 sudo[267593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:09 compute-0 sudo[267593]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:10 compute-0 sudo[267618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:42:10 compute-0 sudo[267618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:10 compute-0 sudo[267618]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:10 compute-0 sudo[267643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:42:10 compute-0 sudo[267643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:10 compute-0 sudo[267643]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:10 compute-0 sudo[267668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:42:10 compute-0 sudo[267668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:10 compute-0 podman[267732]: 2025-11-25 20:42:10.702388691 +0000 UTC m=+0.061241943 container create 132aa56d1f012dc0fb8f45d92c87e2ab6cab32441c21ca8b04b44ec04757f490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:42:10 compute-0 systemd[1]: Started libpod-conmon-132aa56d1f012dc0fb8f45d92c87e2ab6cab32441c21ca8b04b44ec04757f490.scope.
Nov 25 20:42:10 compute-0 podman[267732]: 2025-11-25 20:42:10.672486125 +0000 UTC m=+0.031339427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:42:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:42:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:42:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:42:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:42:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:42:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:42:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:42:10 compute-0 podman[267732]: 2025-11-25 20:42:10.822318837 +0000 UTC m=+0.181172129 container init 132aa56d1f012dc0fb8f45d92c87e2ab6cab32441c21ca8b04b44ec04757f490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:42:10 compute-0 podman[267732]: 2025-11-25 20:42:10.83614654 +0000 UTC m=+0.194999792 container start 132aa56d1f012dc0fb8f45d92c87e2ab6cab32441c21ca8b04b44ec04757f490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:42:10 compute-0 podman[267732]: 2025-11-25 20:42:10.841151495 +0000 UTC m=+0.200004737 container attach 132aa56d1f012dc0fb8f45d92c87e2ab6cab32441c21ca8b04b44ec04757f490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:42:10 compute-0 stoic_lovelace[267748]: 167 167
Nov 25 20:42:10 compute-0 systemd[1]: libpod-132aa56d1f012dc0fb8f45d92c87e2ab6cab32441c21ca8b04b44ec04757f490.scope: Deactivated successfully.
Nov 25 20:42:10 compute-0 podman[267732]: 2025-11-25 20:42:10.846369376 +0000 UTC m=+0.205222618 container died 132aa56d1f012dc0fb8f45d92c87e2ab6cab32441c21ca8b04b44ec04757f490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 20:42:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d74b8e79ad85a7313fdd04cf0a2955efaa9dea1bb717ae9399f4799a15ef2fc-merged.mount: Deactivated successfully.
Nov 25 20:42:10 compute-0 podman[267732]: 2025-11-25 20:42:10.898381559 +0000 UTC m=+0.257234811 container remove 132aa56d1f012dc0fb8f45d92c87e2ab6cab32441c21ca8b04b44ec04757f490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:42:10 compute-0 systemd[1]: libpod-conmon-132aa56d1f012dc0fb8f45d92c87e2ab6cab32441c21ca8b04b44ec04757f490.scope: Deactivated successfully.
Nov 25 20:42:11 compute-0 podman[267773]: 2025-11-25 20:42:11.144399557 +0000 UTC m=+0.069239349 container create 85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:42:11 compute-0 systemd[1]: Started libpod-conmon-85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf.scope.
Nov 25 20:42:11 compute-0 podman[267773]: 2025-11-25 20:42:11.115155068 +0000 UTC m=+0.039994880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:42:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06eaabccff604af7f9421ac1de2ed05a872c3ae5b1c52ee89e3874da18fc7eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06eaabccff604af7f9421ac1de2ed05a872c3ae5b1c52ee89e3874da18fc7eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06eaabccff604af7f9421ac1de2ed05a872c3ae5b1c52ee89e3874da18fc7eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06eaabccff604af7f9421ac1de2ed05a872c3ae5b1c52ee89e3874da18fc7eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06eaabccff604af7f9421ac1de2ed05a872c3ae5b1c52ee89e3874da18fc7eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:11 compute-0 podman[267773]: 2025-11-25 20:42:11.262584625 +0000 UTC m=+0.187424417 container init 85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:42:11 compute-0 podman[267773]: 2025-11-25 20:42:11.280306894 +0000 UTC m=+0.205146686 container start 85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:42:11 compute-0 podman[267773]: 2025-11-25 20:42:11.285870524 +0000 UTC m=+0.210710326 container attach 85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 20:42:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1117: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:11 compute-0 ceph-mon[75144]: pgmap v1117: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:12 compute-0 sad_lewin[267789]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:42:12 compute-0 sad_lewin[267789]: --> relative data size: 1.0
Nov 25 20:42:12 compute-0 sad_lewin[267789]: --> All data devices are unavailable
Nov 25 20:42:12 compute-0 systemd[1]: libpod-85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf.scope: Deactivated successfully.
Nov 25 20:42:12 compute-0 systemd[1]: libpod-85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf.scope: Consumed 1.040s CPU time.
Nov 25 20:42:12 compute-0 podman[267773]: 2025-11-25 20:42:12.380780916 +0000 UTC m=+1.305620718 container died 85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c06eaabccff604af7f9421ac1de2ed05a872c3ae5b1c52ee89e3874da18fc7eb-merged.mount: Deactivated successfully.
Nov 25 20:42:12 compute-0 podman[267773]: 2025-11-25 20:42:12.458693467 +0000 UTC m=+1.383533229 container remove 85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 20:42:12 compute-0 systemd[1]: libpod-conmon-85c1e11e82b6a6a05ff6a3cb7dd4000c943bf993fc182140dd924d943e0393bf.scope: Deactivated successfully.
Nov 25 20:42:12 compute-0 sudo[267668]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:12 compute-0 sudo[267831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:42:12 compute-0 sudo[267831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:12 compute-0 sudo[267831]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:12 compute-0 sudo[267856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:42:12 compute-0 sudo[267856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:12 compute-0 sudo[267856]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:12 compute-0 sudo[267881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:42:12 compute-0 sudo[267881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:12 compute-0 sudo[267881]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:12 compute-0 sudo[267906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:42:12 compute-0 sudo[267906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
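
The sudo audit trail above is the probe sequence cephadm's mgr module runs against a host over SSH: a /bin/true connectivity check, a /bin/which python3 interpreter probe, then the bundled cephadm copy under /var/lib/ceph/<fsid>/ invoking ceph-volume inside a disposable container. Below is a minimal Python sketch of that last call, reusing the image digest, fsid, and argument order exactly as logged; invoking a `cephadm` binary on PATH (rather than the bundled python3 script the host actually runs) and the subprocess wrapper itself are illustrative assumptions, not cephadm's own code.

    import json
    import subprocess

    FSID = "712dd110-763a-5547-8ef7-acda1414fdce"  # cluster fsid from the log
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Re-issue the logged command; cephadm wraps ceph-volume in a short-lived
    # podman container (the create/start/attach/died/remove events that follow).
    out = subprocess.run(
        ["cephadm", "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    lvs_by_osd = json.loads(out)  # same JSON shape as printed further below
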
Nov 25 20:42:13 compute-0 podman[267972]: 2025-11-25 20:42:13.319546034 +0000 UTC m=+0.070421741 container create 708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:42:13 compute-0 systemd[1]: Started libpod-conmon-708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705.scope.
Nov 25 20:42:13 compute-0 podman[267972]: 2025-11-25 20:42:13.294169799 +0000 UTC m=+0.045045556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:42:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:42:13 compute-0 podman[267972]: 2025-11-25 20:42:13.416465428 +0000 UTC m=+0.167341155 container init 708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:42:13 compute-0 podman[267972]: 2025-11-25 20:42:13.427778853 +0000 UTC m=+0.178654550 container start 708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldstine, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:42:13 compute-0 podman[267972]: 2025-11-25 20:42:13.43132814 +0000 UTC m=+0.182203897 container attach 708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:42:13 compute-0 ecstatic_goldstine[267988]: 167 167
Nov 25 20:42:13 compute-0 systemd[1]: libpod-708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705.scope: Deactivated successfully.
Nov 25 20:42:13 compute-0 conmon[267988]: conmon 708799f1cb0236f3796f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705.scope/container/memory.events
Nov 25 20:42:13 compute-0 podman[267972]: 2025-11-25 20:42:13.438204645 +0000 UTC m=+0.189080382 container died 708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldstine, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:42:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b80fe9b28762e2e7fd057ab2596d6d55785df4620143c83b63be65009c6030fa-merged.mount: Deactivated successfully.
Nov 25 20:42:13 compute-0 podman[267972]: 2025-11-25 20:42:13.483438285 +0000 UTC m=+0.234313992 container remove 708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldstine, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:42:13 compute-0 systemd[1]: libpod-conmon-708799f1cb0236f3796fe471808b59007c28e7385f5841613210fe1c3cb8e705.scope: Deactivated successfully.
Nov 25 20:42:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1118: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:13 compute-0 podman[268012]: 2025-11-25 20:42:13.702935918 +0000 UTC m=+0.060236377 container create e91787314634787d1053a2a76cdd729f967f3bbd0c56695c81f640be31501939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:42:13 compute-0 systemd[1]: Started libpod-conmon-e91787314634787d1053a2a76cdd729f967f3bbd0c56695c81f640be31501939.scope.
Nov 25 20:42:13 compute-0 podman[268012]: 2025-11-25 20:42:13.683849943 +0000 UTC m=+0.041150392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:42:13 compute-0 ceph-mon[75144]: pgmap v1118: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:42:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cc3fb5cb9d453b3a526e5977cc96cb154f9aa9e12166b4dc15f3612ce60df8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cc3fb5cb9d453b3a526e5977cc96cb154f9aa9e12166b4dc15f3612ce60df8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cc3fb5cb9d453b3a526e5977cc96cb154f9aa9e12166b4dc15f3612ce60df8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cc3fb5cb9d453b3a526e5977cc96cb154f9aa9e12166b4dc15f3612ce60df8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
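
The four xfs warnings above are the kernel noting that these bind-mounted filesystems use 32-bit inode timestamps, which top out at 0x7fffffff seconds after the Unix epoch. A one-liner confirms where that ceiling lands:

    from datetime import datetime, timezone

    # 0x7fffffff == 2**31 - 1 seconds after 1970-01-01: the "year 2038" limit
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
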
Nov 25 20:42:13 compute-0 podman[268012]: 2025-11-25 20:42:13.806414259 +0000 UTC m=+0.163714748 container init e91787314634787d1053a2a76cdd729f967f3bbd0c56695c81f640be31501939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:42:13 compute-0 podman[268012]: 2025-11-25 20:42:13.823516041 +0000 UTC m=+0.180816460 container start e91787314634787d1053a2a76cdd729f967f3bbd0c56695c81f640be31501939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:42:13 compute-0 podman[268012]: 2025-11-25 20:42:13.827507608 +0000 UTC m=+0.184808117 container attach e91787314634787d1053a2a76cdd729f967f3bbd0c56695c81f640be31501939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]: {
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:     "0": [
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:         {
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "devices": [
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "/dev/loop3"
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             ],
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_name": "ceph_lv0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_size": "21470642176",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "name": "ceph_lv0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "tags": {
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.cluster_name": "ceph",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.crush_device_class": "",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.encrypted": "0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.osd_id": "0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.type": "block",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.vdo": "0"
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             },
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "type": "block",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "vg_name": "ceph_vg0"
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:         }
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:     ],
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:     "1": [
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:         {
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "devices": [
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "/dev/loop4"
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             ],
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_name": "ceph_lv1",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_size": "21470642176",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "name": "ceph_lv1",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "tags": {
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.cluster_name": "ceph",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.crush_device_class": "",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.encrypted": "0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.osd_id": "1",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.type": "block",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.vdo": "0"
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             },
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "type": "block",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "vg_name": "ceph_vg1"
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:         }
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:     ],
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:     "2": [
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:         {
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "devices": [
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "/dev/loop5"
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             ],
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_name": "ceph_lv2",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_size": "21470642176",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "name": "ceph_lv2",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "tags": {
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.cluster_name": "ceph",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.crush_device_class": "",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.encrypted": "0",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.osd_id": "2",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.type": "block",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:                 "ceph.vdo": "0"
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             },
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "type": "block",
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:             "vg_name": "ceph_vg2"
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:         }
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]:     ]
Nov 25 20:42:14 compute-0 crazy_khayyam[268028]: }
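
The JSON that crazy_khayyam just printed is the result of the `ceph-volume lvm list --format json` call audited above: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags carrying each OSD's identity. A minimal parsing sketch, run against a one-record sample shaped like that output (only the fields used here are kept):

    import json

    # Trimmed sample mirroring osd.0 from the container output above.
    raw_json = """
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
            "ceph.encrypted": "0",
            "ceph.type": "block"
          }
        }
      ]
    }
    """

    report = json.loads(raw_json)
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            t = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={t['ceph.osd_fsid']}, encrypted={t['ceph.encrypted']})")
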
Nov 25 20:42:14 compute-0 systemd[1]: libpod-e91787314634787d1053a2a76cdd729f967f3bbd0c56695c81f640be31501939.scope: Deactivated successfully.
Nov 25 20:42:14 compute-0 podman[268012]: 2025-11-25 20:42:14.590287098 +0000 UTC m=+0.947587547 container died e91787314634787d1053a2a76cdd729f967f3bbd0c56695c81f640be31501939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:42:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6cc3fb5cb9d453b3a526e5977cc96cb154f9aa9e12166b4dc15f3612ce60df8-merged.mount: Deactivated successfully.
Nov 25 20:42:14 compute-0 podman[268012]: 2025-11-25 20:42:14.700362889 +0000 UTC m=+1.057663348 container remove e91787314634787d1053a2a76cdd729f967f3bbd0c56695c81f640be31501939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 20:42:14 compute-0 systemd[1]: libpod-conmon-e91787314634787d1053a2a76cdd729f967f3bbd0c56695c81f640be31501939.scope: Deactivated successfully.
Nov 25 20:42:14 compute-0 sudo[267906]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:14 compute-0 sudo[268050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:42:14 compute-0 sudo[268050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:14 compute-0 sudo[268050]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:14 compute-0 sudo[268075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:42:14 compute-0 sudo[268075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:14 compute-0 sudo[268075]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:15 compute-0 sudo[268100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:42:15 compute-0 sudo[268100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:15 compute-0 sudo[268100]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:15 compute-0 sudo[268125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:42:15 compute-0 sudo[268125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
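
The mon's _set_new_cache_sizes line (repeated every few seconds throughout this section) reports raw byte counts; converting the logged values to MiB makes them readable:

    # Values copied from the log line above; all are bytes.
    for label, nbytes in [("cache_size", 1020054731),
                          ("inc_alloc", 348127232),
                          ("full_alloc", 348127232),
                          ("kv_alloc", 322961408)]:
        print(f"{label}: {nbytes / 2**20:.1f} MiB")
    # cache_size ~973 MiB; inc_alloc/full_alloc 332 MiB; kv_alloc 308 MiB
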
Nov 25 20:42:15 compute-0 podman[268190]: 2025-11-25 20:42:15.640335759 +0000 UTC m=+0.063902395 container create 453c8ea19ad5fddf9a63274540bdef1e44ad7b7cd600c17032751014a2c3ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:42:15 compute-0 systemd[1]: Started libpod-conmon-453c8ea19ad5fddf9a63274540bdef1e44ad7b7cd600c17032751014a2c3ed0f.scope.
Nov 25 20:42:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1119: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:15 compute-0 podman[268190]: 2025-11-25 20:42:15.612886588 +0000 UTC m=+0.036453214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:42:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:42:15 compute-0 podman[268190]: 2025-11-25 20:42:15.731062598 +0000 UTC m=+0.154629214 container init 453c8ea19ad5fddf9a63274540bdef1e44ad7b7cd600c17032751014a2c3ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_buck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:42:15 compute-0 podman[268190]: 2025-11-25 20:42:15.743822901 +0000 UTC m=+0.167389507 container start 453c8ea19ad5fddf9a63274540bdef1e44ad7b7cd600c17032751014a2c3ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_buck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 20:42:15 compute-0 podman[268190]: 2025-11-25 20:42:15.748929319 +0000 UTC m=+0.172495975 container attach 453c8ea19ad5fddf9a63274540bdef1e44ad7b7cd600c17032751014a2c3ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_buck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:42:15 compute-0 inspiring_buck[268206]: 167 167
Nov 25 20:42:15 compute-0 systemd[1]: libpod-453c8ea19ad5fddf9a63274540bdef1e44ad7b7cd600c17032751014a2c3ed0f.scope: Deactivated successfully.
Nov 25 20:42:15 compute-0 podman[268190]: 2025-11-25 20:42:15.75228051 +0000 UTC m=+0.175847116 container died 453c8ea19ad5fddf9a63274540bdef1e44ad7b7cd600c17032751014a2c3ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_buck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 20:42:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-268f40310104226a952965bc0b0facf4154a980c902a071d7f741fb67464cd65-merged.mount: Deactivated successfully.
Nov 25 20:42:15 compute-0 ceph-mon[75144]: pgmap v1119: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:15 compute-0 podman[268190]: 2025-11-25 20:42:15.789388711 +0000 UTC m=+0.212955307 container remove 453c8ea19ad5fddf9a63274540bdef1e44ad7b7cd600c17032751014a2c3ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_buck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:42:15 compute-0 systemd[1]: libpod-conmon-453c8ea19ad5fddf9a63274540bdef1e44ad7b7cd600c17032751014a2c3ed0f.scope: Deactivated successfully.
Nov 25 20:42:15 compute-0 podman[268229]: 2025-11-25 20:42:15.982417469 +0000 UTC m=+0.059571488 container create ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:42:16 compute-0 systemd[1]: Started libpod-conmon-ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89.scope.
Nov 25 20:42:16 compute-0 podman[268229]: 2025-11-25 20:42:15.953506129 +0000 UTC m=+0.030660198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:42:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acd2d2483eefd9c4fe55964c0e3491a62beb96d25480dd70f68b00c03dbc649/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acd2d2483eefd9c4fe55964c0e3491a62beb96d25480dd70f68b00c03dbc649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acd2d2483eefd9c4fe55964c0e3491a62beb96d25480dd70f68b00c03dbc649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acd2d2483eefd9c4fe55964c0e3491a62beb96d25480dd70f68b00c03dbc649/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:42:16 compute-0 podman[268229]: 2025-11-25 20:42:16.091587624 +0000 UTC m=+0.168741683 container init ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 20:42:16 compute-0 podman[268229]: 2025-11-25 20:42:16.108356937 +0000 UTC m=+0.185510956 container start ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:42:16 compute-0 podman[268229]: 2025-11-25 20:42:16.113774623 +0000 UTC m=+0.190928612 container attach ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:42:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:42:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4187438837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:42:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:42:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4187438837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:42:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/4187438837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:42:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/4187438837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
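
These audit entries show client.openstack (the OpenStack side at 192.168.122.10) polling cluster capacity and the volumes pool quota via mon commands. A hedged librados sketch issuing the same two commands; the conffile path and the assumption that client.openstack's keyring is resolvable locally are illustrative, not taken from the log:

    import json

    import rados  # python3-rados bindings shipped with Ceph

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        # mon_command takes a JSON-encoded command and an input buffer,
        # returning (retcode, output buffer, error string).
        ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, json.loads(outbuf or b"{}"))
    cluster.shutdown()
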
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]: {
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "osd_id": 2,
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "type": "bluestore"
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:     },
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "osd_id": 1,
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "type": "bluestore"
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:     },
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "osd_id": 0,
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:         "type": "bluestore"
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]:     }
Nov 25 20:42:17 compute-0 adoring_mirzakhani[268246]: }
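
adoring_mirzakhani's output is the companion `ceph-volume raw list --format json` view: keyed by osd_uuid rather than OSD id, and reporting the activated device-mapper path for each bluestore device. A small sketch cross-indexing it by osd_id, against a sample shaped like one record above:

    import json

    raw = json.loads("""
    {
      "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
        "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "type": "bluestore"
      }
    }
    """)

    by_osd = {rec["osd_id"]: rec["device"] for rec in raw.values()}
    print(by_osd)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}
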
Nov 25 20:42:17 compute-0 systemd[1]: libpod-ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89.scope: Deactivated successfully.
Nov 25 20:42:17 compute-0 systemd[1]: libpod-ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89.scope: Consumed 1.147s CPU time.
Nov 25 20:42:17 compute-0 podman[268229]: 2025-11-25 20:42:17.279508255 +0000 UTC m=+1.356662264 container died ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mirzakhani, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:42:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1acd2d2483eefd9c4fe55964c0e3491a62beb96d25480dd70f68b00c03dbc649-merged.mount: Deactivated successfully.
Nov 25 20:42:17 compute-0 podman[268229]: 2025-11-25 20:42:17.351133348 +0000 UTC m=+1.428287367 container remove ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:42:17 compute-0 systemd[1]: libpod-conmon-ff5bc0aeb61123acffe529cc65f76f43c5acfba3b46abfe28f5c5f885cb72d89.scope: Deactivated successfully.
Nov 25 20:42:17 compute-0 sudo[268125]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:42:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:42:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:42:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:42:17 compute-0 sudo[268294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:42:17 compute-0 sudo[268294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:17 compute-0 sudo[268294]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:17 compute-0 sudo[268319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:42:17 compute-0 sudo[268319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:42:17 compute-0 sudo[268319]: pam_unix(sudo:session): session closed for user root
Nov 25 20:42:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1120: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:18 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:42:18 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:42:18 compute-0 ceph-mon[75144]: pgmap v1120: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1121: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:19 compute-0 ceph-mon[75144]: pgmap v1121: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1122: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:21 compute-0 ceph-mon[75144]: pgmap v1122: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1123: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:23 compute-0 ceph-mon[75144]: pgmap v1123: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1124: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:25 compute-0 ceph-mon[75144]: pgmap v1124: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:42:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:42:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:42:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:42:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:42:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:42:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1125: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:27 compute-0 ceph-mon[75144]: pgmap v1125: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:27 compute-0 podman[268344]: 2025-11-25 20:42:27.978689354 +0000 UTC m=+0.072174559 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
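
The health_status events in this section are emitted when podman's timer-driven healthcheck executes the test command from config_data ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/<service>). The same check can be exercised by hand; a sketch using the container name from the log line above, where exit status 0 means healthy:

    import subprocess

    # Manually run the same check podman's timer fires for the event above.
    name = "ovn_metadata_agent"
    res = subprocess.run(["podman", "healthcheck", "run", name])
    print(f"{name}: {'healthy' if res.returncode == 0 else 'unhealthy'}")
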
Nov 25 20:42:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1126: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:29 compute-0 ceph-mon[75144]: pgmap v1126: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:30 compute-0 podman[268364]: 2025-11-25 20:42:30.010658187 +0000 UTC m=+0.090332668 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 20:42:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1127: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:31 compute-0 ceph-mon[75144]: pgmap v1127: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1128: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:33 compute-0 ceph-mon[75144]: pgmap v1128: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1129: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:35 compute-0 ceph-mon[75144]: pgmap v1129: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:37 compute-0 podman[268384]: 2025-11-25 20:42:37.044594244 +0000 UTC m=+0.131919260 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 20:42:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1130: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:37 compute-0 ceph-mon[75144]: pgmap v1130: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1131: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:39 compute-0 ceph-mon[75144]: pgmap v1131: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1132: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:41 compute-0 ceph-mon[75144]: pgmap v1132: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:43 compute-0 nova_compute[248866]: 2025-11-25 20:42:43.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:42:43 compute-0 nova_compute[248866]: 2025-11-25 20:42:43.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:42:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1133: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:43 compute-0 ceph-mon[75144]: pgmap v1133: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:45 compute-0 nova_compute[248866]: 2025-11-25 20:42:45.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:42:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1134: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:45 compute-0 ceph-mon[75144]: pgmap v1134: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:47 compute-0 nova_compute[248866]: 2025-11-25 20:42:47.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:42:47 compute-0 nova_compute[248866]: 2025-11-25 20:42:47.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:42:47 compute-0 nova_compute[248866]: 2025-11-25 20:42:47.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:42:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1135: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:47 compute-0 ceph-mon[75144]: pgmap v1135: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:42:48.958 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:42:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:42:48.959 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:42:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:42:48.959 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.087 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.087 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.088 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.088 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.089 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:42:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:42:49 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1118880162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.624 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:42:49 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1118880162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:42:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1136: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.876 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.878 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5300MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.878 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.879 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.969 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:42:49 compute-0 nova_compute[248866]: 2025-11-25 20:42:49.970 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:42:50 compute-0 nova_compute[248866]: 2025-11-25 20:42:50.006 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:42:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:42:50 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2339531697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:42:50 compute-0 nova_compute[248866]: 2025-11-25 20:42:50.466 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:42:50 compute-0 nova_compute[248866]: 2025-11-25 20:42:50.475 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:42:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:50 compute-0 nova_compute[248866]: 2025-11-25 20:42:50.502 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:42:50 compute-0 nova_compute[248866]: 2025-11-25 20:42:50.505 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:42:50 compute-0 nova_compute[248866]: 2025-11-25 20:42:50.506 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:42:50 compute-0 ceph-mon[75144]: pgmap v1136: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:50 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2339531697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:42:51 compute-0 nova_compute[248866]: 2025-11-25 20:42:51.508 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:42:51 compute-0 nova_compute[248866]: 2025-11-25 20:42:51.509 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:42:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1137: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:51 compute-0 ceph-mon[75144]: pgmap v1137: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:52 compute-0 nova_compute[248866]: 2025-11-25 20:42:52.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:42:52 compute-0 nova_compute[248866]: 2025-11-25 20:42:52.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:42:52 compute-0 nova_compute[248866]: 2025-11-25 20:42:52.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:42:52 compute-0 nova_compute[248866]: 2025-11-25 20:42:52.061 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:42:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1138: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:53 compute-0 ceph-mon[75144]: pgmap v1138: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:42:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1139: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:55 compute-0 ceph-mon[75144]: pgmap v1139: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:42:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:42:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:42:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:42:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:42:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:42:57
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes']
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:42:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1140: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:57 compute-0 ceph-mon[75144]: pgmap v1140: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:58 compute-0 podman[268454]: 2025-11-25 20:42:58.945659776 +0000 UTC m=+0.051991544 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 20:42:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1141: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:42:59 compute-0 ceph-mon[75144]: pgmap v1141: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:00 compute-0 podman[268475]: 2025-11-25 20:43:00.985116551 +0000 UTC m=+0.080404250 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 20:43:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1142: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:01 compute-0 ceph-mon[75144]: pgmap v1142: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:43:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:43:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1143: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:03 compute-0 ceph-mon[75144]: pgmap v1143: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1144: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:05 compute-0 ceph-mon[75144]: pgmap v1144: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1145: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:07 compute-0 ceph-mon[75144]: pgmap v1145: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:08 compute-0 podman[268496]: 2025-11-25 20:43:08.031034893 +0000 UTC m=+0.123950465 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:43:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1146: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:09 compute-0 ceph-mon[75144]: pgmap v1146: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1147: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:11 compute-0 ceph-mon[75144]: pgmap v1147: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1148: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:13 compute-0 ceph-mon[75144]: pgmap v1148: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1149: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:15 compute-0 ceph-mon[75144]: pgmap v1149: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:43:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2037892643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:43:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:43:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2037892643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:43:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2037892643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:43:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2037892643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:43:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1150: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:17 compute-0 sudo[268524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:43:17 compute-0 sudo[268524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:17 compute-0 sudo[268524]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:17 compute-0 sudo[268549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:43:17 compute-0 sudo[268549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:17 compute-0 sudo[268549]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:17 compute-0 sudo[268574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:43:17 compute-0 sudo[268574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:17 compute-0 sudo[268574]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:17 compute-0 sudo[268599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:43:17 compute-0 sudo[268599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:18 compute-0 ceph-mon[75144]: pgmap v1150: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:18 compute-0 sudo[268599]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:43:18 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:43:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:43:18 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:43:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:43:18 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:43:18 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 68f884f9-a5fe-496b-9897-5d9cfd8503d2 does not exist
Nov 25 20:43:18 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev fff8238d-415e-43e9-a22a-901f879c3b0c does not exist
Nov 25 20:43:18 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev bceedb64-bdbf-4bd2-8de1-5963d18324e6 does not exist
Nov 25 20:43:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:43:18 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:43:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:43:18 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:43:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:43:18 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:43:18 compute-0 sudo[268654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:43:18 compute-0 sudo[268654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:18 compute-0 sudo[268654]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:18 compute-0 sudo[268679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:43:18 compute-0 sudo[268679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:18 compute-0 sudo[268679]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:18 compute-0 sudo[268704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:43:18 compute-0 sudo[268704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:18 compute-0 sudo[268704]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:18 compute-0 sudo[268729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:43:18 compute-0 sudo[268729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:19 compute-0 podman[268793]: 2025-11-25 20:43:19.119533546 +0000 UTC m=+0.038652494 container create fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:43:19 compute-0 systemd[1]: Started libpod-conmon-fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036.scope.
Nov 25 20:43:19 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:43:19 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:43:19 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:43:19 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:43:19 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:43:19 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:43:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:43:19 compute-0 podman[268793]: 2025-11-25 20:43:19.102599679 +0000 UTC m=+0.021718637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:43:19 compute-0 podman[268793]: 2025-11-25 20:43:19.207771987 +0000 UTC m=+0.126890955 container init fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:43:19 compute-0 podman[268793]: 2025-11-25 20:43:19.218213908 +0000 UTC m=+0.137332856 container start fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:43:19 compute-0 podman[268793]: 2025-11-25 20:43:19.221181558 +0000 UTC m=+0.140300526 container attach fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:43:19 compute-0 pedantic_knuth[268809]: 167 167
Nov 25 20:43:19 compute-0 systemd[1]: libpod-fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036.scope: Deactivated successfully.
Nov 25 20:43:19 compute-0 conmon[268809]: conmon fc923f69e44b1fd3d5e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036.scope/container/memory.events
Nov 25 20:43:19 compute-0 podman[268793]: 2025-11-25 20:43:19.228988359 +0000 UTC m=+0.148107317 container died fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-54b4b905b88c0d2b3016bee40420ba3f6cb0ee90be0d9cff2d74c3d45db76c21-merged.mount: Deactivated successfully.
Nov 25 20:43:19 compute-0 podman[268793]: 2025-11-25 20:43:19.270867729 +0000 UTC m=+0.189986687 container remove fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:43:19 compute-0 systemd[1]: libpod-conmon-fc923f69e44b1fd3d5e11cb1f531ca3173bebdc985546b94adeeae837bd4e036.scope: Deactivated successfully.
Nov 25 20:43:19 compute-0 podman[268833]: 2025-11-25 20:43:19.482556101 +0000 UTC m=+0.059251310 container create 5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hellman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:43:19 compute-0 systemd[1]: Started libpod-conmon-5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958.scope.
Nov 25 20:43:19 compute-0 podman[268833]: 2025-11-25 20:43:19.452435858 +0000 UTC m=+0.029131167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:43:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7a4c9f38985230610eb41118e0bdc4ecfa2418bfd2ca051c84e01f6a6f6272/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7a4c9f38985230610eb41118e0bdc4ecfa2418bfd2ca051c84e01f6a6f6272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7a4c9f38985230610eb41118e0bdc4ecfa2418bfd2ca051c84e01f6a6f6272/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7a4c9f38985230610eb41118e0bdc4ecfa2418bfd2ca051c84e01f6a6f6272/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7a4c9f38985230610eb41118e0bdc4ecfa2418bfd2ca051c84e01f6a6f6272/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:19 compute-0 podman[268833]: 2025-11-25 20:43:19.581165011 +0000 UTC m=+0.157860220 container init 5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hellman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:43:19 compute-0 podman[268833]: 2025-11-25 20:43:19.589341341 +0000 UTC m=+0.166036560 container start 5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hellman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:43:19 compute-0 podman[268833]: 2025-11-25 20:43:19.608685813 +0000 UTC m=+0.185381022 container attach 5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hellman, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 20:43:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1151: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:20 compute-0 ceph-mon[75144]: pgmap v1151: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:20 compute-0 cool_hellman[268850]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:43:20 compute-0 cool_hellman[268850]: --> relative data size: 1.0
Nov 25 20:43:20 compute-0 cool_hellman[268850]: --> All data devices are unavailable
Nov 25 20:43:20 compute-0 systemd[1]: libpod-5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958.scope: Deactivated successfully.
Nov 25 20:43:20 compute-0 systemd[1]: libpod-5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958.scope: Consumed 1.097s CPU time.
Nov 25 20:43:20 compute-0 podman[268833]: 2025-11-25 20:43:20.734971891 +0000 UTC m=+1.311667100 container died 5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 20:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c7a4c9f38985230610eb41118e0bdc4ecfa2418bfd2ca051c84e01f6a6f6272-merged.mount: Deactivated successfully.
Nov 25 20:43:20 compute-0 podman[268833]: 2025-11-25 20:43:20.803212822 +0000 UTC m=+1.379908021 container remove 5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:43:20 compute-0 systemd[1]: libpod-conmon-5ed0b3226fbfeb2baae1a95f16f0e3f60c77c83844534e66b910d133194ed958.scope: Deactivated successfully.
Nov 25 20:43:20 compute-0 sudo[268729]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:20 compute-0 sudo[268893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:43:20 compute-0 sudo[268893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:20 compute-0 sudo[268893]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:20 compute-0 sudo[268918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:43:21 compute-0 sudo[268918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:21 compute-0 sudo[268918]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:21 compute-0 sudo[268943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:43:21 compute-0 sudo[268943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:21 compute-0 sudo[268943]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:21 compute-0 sudo[268968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:43:21 compute-0 sudo[268968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
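The sudo command above is how cephadm's orchestrator gathers device inventory: the copied cephadm shim launches a short-lived ceph container (awesome_mestorf, then lucid_feynman below) that runs ceph-volume inside the image. A minimal sketch of replaying the same query by hand in Python, reusing the shim path, image digest and fsid exactly as logged; it assumes the shim still exists on this host and that ceph-volume's JSON is the only thing the shim writes to stdout (cephadm's own logging goes to stderr):

    #!/usr/bin/env python3
    # Sketch: re-run the ceph-volume inventory query that cephadm issues above.
    # Paths, image digest and fsid are copied verbatim from the log; this
    # mirrors the orchestrator's call but is not its implementation.
    import json
    import subprocess

    FSID = "712dd110-763a-5547-8ef7-acda1414fdce"
    SHIM = (f"/var/lib/ceph/{FSID}/cephadm."
            "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["sudo", "/bin/python3", SHIM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    lvs = json.loads(out)
    print(sorted(lvs.keys()))  # expected on this host: ['0', '1', '2']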
Nov 25 20:43:21 compute-0 podman[269034]: 2025-11-25 20:43:21.623654638 +0000 UTC m=+0.115403965 container create 9c1e3d549c8463415b1ef0216a47c4faf9d5c45816b65010b4f96d079991a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:43:21 compute-0 podman[269034]: 2025-11-25 20:43:21.535372636 +0000 UTC m=+0.027121983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:43:21 compute-0 systemd[1]: Started libpod-conmon-9c1e3d549c8463415b1ef0216a47c4faf9d5c45816b65010b4f96d079991a838.scope.
Nov 25 20:43:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1152: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:43:21 compute-0 podman[269034]: 2025-11-25 20:43:21.758749573 +0000 UTC m=+0.250498950 container init 9c1e3d549c8463415b1ef0216a47c4faf9d5c45816b65010b4f96d079991a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:43:21 compute-0 podman[269034]: 2025-11-25 20:43:21.771000454 +0000 UTC m=+0.262749781 container start 9c1e3d549c8463415b1ef0216a47c4faf9d5c45816b65010b4f96d079991a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:43:21 compute-0 podman[269034]: 2025-11-25 20:43:21.777674234 +0000 UTC m=+0.269423611 container attach 9c1e3d549c8463415b1ef0216a47c4faf9d5c45816b65010b4f96d079991a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 20:43:21 compute-0 awesome_mestorf[269050]: 167 167
Nov 25 20:43:21 compute-0 systemd[1]: libpod-9c1e3d549c8463415b1ef0216a47c4faf9d5c45816b65010b4f96d079991a838.scope: Deactivated successfully.
Nov 25 20:43:21 compute-0 podman[269034]: 2025-11-25 20:43:21.781461205 +0000 UTC m=+0.273210542 container died 9c1e3d549c8463415b1ef0216a47c4faf9d5c45816b65010b4f96d079991a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 20:43:21 compute-0 ceph-mon[75144]: pgmap v1152: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6bd647b26acfe2edce1e501be3cc9e060af54da7d4ea8a82be1e3586c7c0306-merged.mount: Deactivated successfully.
Nov 25 20:43:21 compute-0 podman[269034]: 2025-11-25 20:43:21.841871036 +0000 UTC m=+0.333620333 container remove 9c1e3d549c8463415b1ef0216a47c4faf9d5c45816b65010b4f96d079991a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 20:43:21 compute-0 systemd[1]: libpod-conmon-9c1e3d549c8463415b1ef0216a47c4faf9d5c45816b65010b4f96d079991a838.scope: Deactivated successfully.
Nov 25 20:43:22 compute-0 podman[269075]: 2025-11-25 20:43:22.060201566 +0000 UTC m=+0.071308706 container create 3457e85f80eee2ec5251887709f2ac794dca8399795429cf86b9e1f89ed2cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:43:22 compute-0 podman[269075]: 2025-11-25 20:43:22.020321621 +0000 UTC m=+0.031428820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:43:22 compute-0 systemd[1]: Started libpod-conmon-3457e85f80eee2ec5251887709f2ac794dca8399795429cf86b9e1f89ed2cc2f.scope.
Nov 25 20:43:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b91762ce02cb6762b4ee539c5f702969f7a8281f39aa92dc7ac6898176ae74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b91762ce02cb6762b4ee539c5f702969f7a8281f39aa92dc7ac6898176ae74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b91762ce02cb6762b4ee539c5f702969f7a8281f39aa92dc7ac6898176ae74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b91762ce02cb6762b4ee539c5f702969f7a8281f39aa92dc7ac6898176ae74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:22 compute-0 podman[269075]: 2025-11-25 20:43:22.21305168 +0000 UTC m=+0.224158799 container init 3457e85f80eee2ec5251887709f2ac794dca8399795429cf86b9e1f89ed2cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:43:22 compute-0 podman[269075]: 2025-11-25 20:43:22.224416926 +0000 UTC m=+0.235524025 container start 3457e85f80eee2ec5251887709f2ac794dca8399795429cf86b9e1f89ed2cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_feynman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:43:22 compute-0 podman[269075]: 2025-11-25 20:43:22.262431552 +0000 UTC m=+0.273538661 container attach 3457e85f80eee2ec5251887709f2ac794dca8399795429cf86b9e1f89ed2cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:43:23 compute-0 lucid_feynman[269091]: {
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:     "0": [
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:         {
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "devices": [
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "/dev/loop3"
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             ],
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_name": "ceph_lv0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_size": "21470642176",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "name": "ceph_lv0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "tags": {
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.cluster_name": "ceph",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.crush_device_class": "",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.encrypted": "0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.osd_id": "0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.type": "block",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.vdo": "0"
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             },
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "type": "block",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "vg_name": "ceph_vg0"
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:         }
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:     ],
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:     "1": [
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:         {
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "devices": [
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "/dev/loop4"
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             ],
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_name": "ceph_lv1",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_size": "21470642176",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "name": "ceph_lv1",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "tags": {
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.cluster_name": "ceph",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.crush_device_class": "",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.encrypted": "0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.osd_id": "1",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.type": "block",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.vdo": "0"
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             },
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "type": "block",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "vg_name": "ceph_vg1"
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:         }
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:     ],
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:     "2": [
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:         {
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "devices": [
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "/dev/loop5"
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             ],
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_name": "ceph_lv2",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_size": "21470642176",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "name": "ceph_lv2",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "tags": {
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.cluster_name": "ceph",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.crush_device_class": "",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.encrypted": "0",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.osd_id": "2",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.type": "block",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:                 "ceph.vdo": "0"
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             },
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "type": "block",
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:             "vg_name": "ceph_vg2"
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:         }
Nov 25 20:43:23 compute-0 lucid_feynman[269091]:     ]
Nov 25 20:43:23 compute-0 lucid_feynman[269091]: }
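The JSON that lucid_feynman just emitted is ceph-volume's LVM inventory, keyed by OSD id, with the authoritative metadata duplicated as LV tags. A small sketch of extracting the fields the orchestrator reports, run against a copy of the output trimmed to OSD 0 (the full document above has the same shape for ids 1 and 2):

    #!/usr/bin/env python3
    # Sketch: map osd_id -> (backing device, LV path, osd_fsid) from the
    # `ceph-volume lvm list --format json` payload shown above, trimmed here
    # to OSD 0 so the example stays self-contained.
    import json

    LVM_LIST = json.loads("""
    {
      "0": [{
        "devices": ["/dev/loop3"],
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "type": "block",
        "tags": {
          "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
          "ceph.osd_id": "0",
          "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce"
        }
      }]
    }
    """)

    for osd_id, lvs in sorted(LVM_LIST.items()):
        for lv in lvs:
            if lv["type"] != "block":
                continue  # a separate DB/WAL LV would carry type "db" or "wal"
            print(osd_id, lv["devices"], lv["lv_path"],
                  lv["tags"]["ceph.osd_fsid"])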
Nov 25 20:43:23 compute-0 systemd[1]: libpod-3457e85f80eee2ec5251887709f2ac794dca8399795429cf86b9e1f89ed2cc2f.scope: Deactivated successfully.
Nov 25 20:43:23 compute-0 podman[269075]: 2025-11-25 20:43:23.063844075 +0000 UTC m=+1.074951204 container died 3457e85f80eee2ec5251887709f2ac794dca8399795429cf86b9e1f89ed2cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_feynman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:43:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9b91762ce02cb6762b4ee539c5f702969f7a8281f39aa92dc7ac6898176ae74-merged.mount: Deactivated successfully.
Nov 25 20:43:23 compute-0 podman[269075]: 2025-11-25 20:43:23.50730752 +0000 UTC m=+1.518414659 container remove 3457e85f80eee2ec5251887709f2ac794dca8399795429cf86b9e1f89ed2cc2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:43:23 compute-0 systemd[1]: libpod-conmon-3457e85f80eee2ec5251887709f2ac794dca8399795429cf86b9e1f89ed2cc2f.scope: Deactivated successfully.
Nov 25 20:43:23 compute-0 sudo[268968]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:23 compute-0 sudo[269114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:43:23 compute-0 sudo[269114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:23 compute-0 sudo[269114]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1153: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:23 compute-0 sudo[269139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:43:23 compute-0 sudo[269139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:23 compute-0 sudo[269139]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:23 compute-0 sudo[269164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:43:23 compute-0 sudo[269164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:23 compute-0 sudo[269164]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:23 compute-0 ceph-mon[75144]: pgmap v1153: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:23 compute-0 sudo[269189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:43:23 compute-0 sudo[269189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:24 compute-0 podman[269255]: 2025-11-25 20:43:24.279599057 +0000 UTC m=+0.042463957 container create 81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chandrasekhar, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:43:24 compute-0 systemd[1]: Started libpod-conmon-81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2.scope.
Nov 25 20:43:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:43:24 compute-0 podman[269255]: 2025-11-25 20:43:24.26343669 +0000 UTC m=+0.026301610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:43:24 compute-0 podman[269255]: 2025-11-25 20:43:24.372285557 +0000 UTC m=+0.135150537 container init 81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:43:24 compute-0 podman[269255]: 2025-11-25 20:43:24.38535248 +0000 UTC m=+0.148217380 container start 81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chandrasekhar, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 20:43:24 compute-0 podman[269255]: 2025-11-25 20:43:24.388736571 +0000 UTC m=+0.151601571 container attach 81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chandrasekhar, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:43:24 compute-0 systemd[1]: libpod-81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2.scope: Deactivated successfully.
Nov 25 20:43:24 compute-0 laughing_chandrasekhar[269271]: 167 167
Nov 25 20:43:24 compute-0 conmon[269271]: conmon 81c129844ce697651a4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2.scope/container/memory.events
Nov 25 20:43:24 compute-0 podman[269255]: 2025-11-25 20:43:24.394680551 +0000 UTC m=+0.157545451 container died 81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chandrasekhar, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 20:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-827901be9b87de7e2970d799fb47a416f166e42df24b750fcdef48582a923b88-merged.mount: Deactivated successfully.
Nov 25 20:43:24 compute-0 podman[269255]: 2025-11-25 20:43:24.453339624 +0000 UTC m=+0.216204524 container remove 81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chandrasekhar, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:43:24 compute-0 systemd[1]: libpod-conmon-81c129844ce697651a4e7c859b0bca049141b4c1e7d9f01b4a43479ff0f849a2.scope: Deactivated successfully.
Nov 25 20:43:24 compute-0 podman[269295]: 2025-11-25 20:43:24.693436392 +0000 UTC m=+0.058263633 container create 97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:43:24 compute-0 systemd[1]: Started libpod-conmon-97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78.scope.
Nov 25 20:43:24 compute-0 podman[269295]: 2025-11-25 20:43:24.674635005 +0000 UTC m=+0.039462286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:43:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c4043bfdd6bdb436c150da27dfb40ccd7d7069fe0e9e53b918062727df72ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c4043bfdd6bdb436c150da27dfb40ccd7d7069fe0e9e53b918062727df72ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c4043bfdd6bdb436c150da27dfb40ccd7d7069fe0e9e53b918062727df72ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c4043bfdd6bdb436c150da27dfb40ccd7d7069fe0e9e53b918062727df72ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:43:24 compute-0 podman[269295]: 2025-11-25 20:43:24.80716752 +0000 UTC m=+0.171994771 container init 97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ride, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:43:24 compute-0 podman[269295]: 2025-11-25 20:43:24.820032478 +0000 UTC m=+0.184859719 container start 97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:43:24 compute-0 podman[269295]: 2025-11-25 20:43:24.85941846 +0000 UTC m=+0.224245711 container attach 97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:43:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1154: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:25 compute-0 friendly_ride[269311]: {
Nov 25 20:43:25 compute-0 friendly_ride[269311]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "osd_id": 2,
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "type": "bluestore"
Nov 25 20:43:25 compute-0 friendly_ride[269311]:     },
Nov 25 20:43:25 compute-0 friendly_ride[269311]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "osd_id": 1,
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "type": "bluestore"
Nov 25 20:43:25 compute-0 friendly_ride[269311]:     },
Nov 25 20:43:25 compute-0 friendly_ride[269311]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "osd_id": 0,
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:43:25 compute-0 friendly_ride[269311]:         "type": "bluestore"
Nov 25 20:43:25 compute-0 friendly_ride[269311]:     }
Nov 25 20:43:25 compute-0 friendly_ride[269311]: }
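friendly_ride's output is the companion `raw list` view of the same three OSDs: keyed by osd_uuid instead of osd_id, reporting the /dev/mapper path instead of the LV path, and confirming each is a bluestore OSD. The two listings can be joined on the OSD fsid/uuid; a sketch assuming the two JSON payloads above were saved to the hypothetical files lvm_list.json and raw_list.json:

    #!/usr/bin/env python3
    # Sketch: cross-check `lvm list` against `raw list` by joining on the OSD
    # fsid/uuid. Assumes the two JSON documents printed above were captured to
    # lvm_list.json and raw_list.json (hypothetical file names).
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)   # keyed by osd_id: "0", "1", "2"
    with open("raw_list.json") as f:
        raw = json.load(f)   # keyed by osd_uuid

    by_fsid = {lv["tags"]["ceph.osd_fsid"]: (osd_id, lv["lv_path"])
               for osd_id, lvs in lvm.items()
               for lv in lvs if lv["type"] == "block"}

    for osd_uuid, entry in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        osd_id, lv_path = by_fsid[osd_uuid]
        assert int(osd_id) == entry["osd_id"]
        # e.g. osd.0: /dev/ceph_vg0/ceph_lv0 -> /dev/mapper/ceph_vg0-ceph_lv0
        print(f"osd.{osd_id}: {lv_path} -> {entry['device']} ({entry['type']})")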
Nov 25 20:43:25 compute-0 ceph-mon[75144]: pgmap v1154: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:25 compute-0 systemd[1]: libpod-97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78.scope: Deactivated successfully.
Nov 25 20:43:25 compute-0 systemd[1]: libpod-97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78.scope: Consumed 1.123s CPU time.
Nov 25 20:43:25 compute-0 podman[269295]: 2025-11-25 20:43:25.934965489 +0000 UTC m=+1.299792730 container died 97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ride, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5c4043bfdd6bdb436c150da27dfb40ccd7d7069fe0e9e53b918062727df72ac-merged.mount: Deactivated successfully.
Nov 25 20:43:26 compute-0 podman[269295]: 2025-11-25 20:43:26.021568966 +0000 UTC m=+1.386396207 container remove 97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:43:26 compute-0 systemd[1]: libpod-conmon-97e98d965bdfb5e32daa8cf9b02cfb18e520d3c35e5f85ae34d35ee3a4297e78.scope: Deactivated successfully.
Nov 25 20:43:26 compute-0 sudo[269189]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:43:26 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:43:26 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:43:26 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:43:26 compute-0 sudo[269356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:43:26 compute-0 sudo[269356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:26 compute-0 sudo[269356]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:26 compute-0 sudo[269381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:43:26 compute-0 sudo[269381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:43:26 compute-0 sudo[269381]: pam_unix(sudo:session): session closed for user root
Nov 25 20:43:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:43:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:43:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:43:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:43:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:43:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:43:27 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:43:27 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:43:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1155: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:28 compute-0 ceph-mon[75144]: pgmap v1155: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1156: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:29 compute-0 ceph-mon[75144]: pgmap v1156: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:30 compute-0 podman[269406]: 2025-11-25 20:43:30.015230786 +0000 UTC m=+0.110282496 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 20:43:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1157: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:31 compute-0 ceph-mon[75144]: pgmap v1157: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:31 compute-0 podman[269427]: 2025-11-25 20:43:31.962768632 +0000 UTC m=+0.054464271 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 20:43:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1158: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:33 compute-0 ceph-mon[75144]: pgmap v1158: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1159: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:35 compute-0 ceph-mon[75144]: pgmap v1159: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1160: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:37 compute-0 ceph-mon[75144]: pgmap v1160: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:39 compute-0 podman[269448]: 2025-11-25 20:43:39.017542403 +0000 UTC m=+0.106860114 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 20:43:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1161: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:39 compute-0 ceph-mon[75144]: pgmap v1161: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1162: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:41 compute-0 ceph-mon[75144]: pgmap v1162: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1163: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:43 compute-0 ceph-mon[75144]: pgmap v1163: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:44 compute-0 nova_compute[248866]: 2025-11-25 20:43:44.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:45 compute-0 nova_compute[248866]: 2025-11-25 20:43:45.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1164: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:45 compute-0 ceph-mon[75144]: pgmap v1164: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:46 compute-0 nova_compute[248866]: 2025-11-25 20:43:46.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1165: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:47 compute-0 ceph-mon[75144]: pgmap v1165: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:48 compute-0 nova_compute[248866]: 2025-11-25 20:43:48.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:43:48.959 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:43:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:43:48.959 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:43:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:43:48.959 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
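[Annotation] The Acquiring/acquired/released triplet above is the debug trace oslo.concurrency emits around neutron's ProcessMonitor._check_child_processes. A sketch of the same pattern, assuming the documented oslo_concurrency.lockutils API:

    from oslo_concurrency import lockutils

    # Decorator form: serializes all callers sharing the lock name and
    # produces exactly the Acquiring/acquired/released lines seen above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        ...  # inspect child PIDs, respawn anything that died

    # Context-manager form for finer-grained critical sections.
    with lockutils.lock("_check_child_processes"):
        pass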
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
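[Annotation] Each "Running periodic task ComputeManager._*" line is oslo.service's periodic task runner ticking; _reclaim_queued_deletes is skipped here because reclaim_instance_interval is not set to a positive value. A sketch of how such tasks are declared, assuming the standard oslo_service.periodic_task decorator API:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class ManagerLike(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_unconfirmed_resizes(self, context):
            # Confirm resizes the user never confirmed themselves.
            pass

    mgr = ManagerLike()
    mgr.run_periodic_tasks(context=None)  # logs "Running periodic task ..."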
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.095 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.096 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.096 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.096 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.096 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:43:49 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:43:49 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3367335533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.530 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
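[Annotation] To size its RBD-backed disk, the resource tracker shells out to the exact command logged above (ceph df --format=json, 0.433s here), which the mon dispatches as the audit entries show. A sketch of the same call and the usual capacity math; the stats.total_bytes / stats.total_avail_bytes keys are the documented ceph df JSON fields:

    import json
    import subprocess

    def rbd_capacity_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        stats = json.loads(out)["stats"]
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

    total, avail = rbd_capacity_gib()
    print(f"{avail:.2f} GiB free of {total:.2f} GiB")  # ~60 / 60, matching the pgmap lines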
Nov 25 20:43:49 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3367335533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.703 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.704 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5299MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.704 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.705 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:43:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1166: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.802 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.802 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:43:49 compute-0 nova_compute[248866]: 2025-11-25 20:43:49.818 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:43:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:43:50 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094567738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:43:50 compute-0 nova_compute[248866]: 2025-11-25 20:43:50.255 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:43:50 compute-0 nova_compute[248866]: 2025-11-25 20:43:50.263 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:43:50 compute-0 nova_compute[248866]: 2025-11-25 20:43:50.280 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
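[Annotation] The inventory reported to placement encodes schedulable capacity as (total - reserved) x allocation_ratio. Worked out for the exact values in the line above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0        -> 8 physical cores oversubscribed 4x
    # MEMORY_MB 7168.0 -> 512 MB held back for the host
    # DISK_GB 53.1     -> 10% headroom kept on the 59 GB RBD pool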
Nov 25 20:43:50 compute-0 nova_compute[248866]: 2025-11-25 20:43:50.281 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:43:50 compute-0 nova_compute[248866]: 2025-11-25 20:43:50.281 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:43:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:50 compute-0 ceph-mon[75144]: pgmap v1166: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:50 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4094567738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:43:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1167: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:51 compute-0 ceph-mon[75144]: pgmap v1167: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:52 compute-0 nova_compute[248866]: 2025-11-25 20:43:52.282 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:52 compute-0 nova_compute[248866]: 2025-11-25 20:43:52.283 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:53 compute-0 nova_compute[248866]: 2025-11-25 20:43:53.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:53 compute-0 nova_compute[248866]: 2025-11-25 20:43:53.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:43:53 compute-0 nova_compute[248866]: 2025-11-25 20:43:53.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:43:53 compute-0 nova_compute[248866]: 2025-11-25 20:43:53.059 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:43:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1168: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:54 compute-0 ceph-mon[75144]: pgmap v1168: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:55 compute-0 nova_compute[248866]: 2025-11-25 20:43:55.054 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:43:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:43:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1169: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:55 compute-0 ceph-mon[75144]: pgmap v1169: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:43:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:43:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:43:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:43:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:43:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:43:57
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'images', 'backups']
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:43:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1170: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:58 compute-0 ceph-mon[75144]: pgmap v1170: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1171: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:43:59 compute-0 ceph-mon[75144]: pgmap v1171: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:00 compute-0 podman[269518]: 2025-11-25 20:44:00.984190586 +0000 UTC m=+0.072593590 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:44:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1172: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:01 compute-0 ceph-mon[75144]: pgmap v1172: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:44:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
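[Annotation] The pg_autoscaler's "pg target" is the pool's share of raw space times a cluster-wide PG budget, then quantized. The numbers above are consistent with a budget of 300, e.g. 3 OSDs x the default mon_target_pg_per_osd of 100 (an assumption, not stated in the log): 1.4371499967441557e-05 x 300 = 0.004311449990232467, exactly the target printed for '.mgr'. Empty pools keep their current 32 PGs because the autoscaler only acts on large deviations. A sketch of that arithmetic:

    def pg_target(usage_ratio, pg_budget=300, bias=1.0, floor=1):
        """Reproduce the pg_autoscaler arithmetic seen above (sketch).

        pg_budget = num_osds * mon_target_pg_per_osd is an assumption;
        quantizing to a power of two matches the logged values 1 and 32.
        """
        raw = usage_ratio * pg_budget * bias
        n = max(floor, round(raw))
        p = 1
        while p < n:          # round up to the next power of two
            p *= 2
        return raw, p

    print(pg_target(1.4371499967441557e-05))  # (~0.00431, 1), as logged for '.mgr'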
Nov 25 20:44:02 compute-0 podman[269538]: 2025-11-25 20:44:02.987066354 +0000 UTC m=+0.073588126 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:44:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1173: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:03 compute-0 ceph-mon[75144]: pgmap v1173: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1174: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:05 compute-0 ceph-mon[75144]: pgmap v1174: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1175: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:07 compute-0 ceph-mon[75144]: pgmap v1175: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1176: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:09 compute-0 ceph-mon[75144]: pgmap v1176: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:10 compute-0 podman[269560]: 2025-11-25 20:44:10.06126648 +0000 UTC m=+0.154083058 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 25 20:44:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1177: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:11 compute-0 ceph-mon[75144]: pgmap v1177: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1178: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:13 compute-0 ceph-mon[75144]: pgmap v1178: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1179: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:15 compute-0 ceph-mon[75144]: pgmap v1179: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:44:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457171328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:44:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:44:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457171328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:44:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/457171328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:44:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/457171328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
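[Annotation] This burst is a different client (192.168.122.10 rather than the compute host) polling capacity: a df plus an "osd pool get-quota" on the volumes pool, the pattern of a Cinder-style backend stats poll. The same check from Python; "ceph osd pool get-quota <pool>" is a real mon command, but the quota_max_bytes / quota_max_objects JSON field names are an assumption:

    import json
    import subprocess

    def pool_quota(pool="volumes", user="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "osd", "pool", "get-quota", pool,
             "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        quota = json.loads(out)
        # 0 conventionally means "no quota set" for both fields.
        return quota.get("quota_max_bytes"), quota.get("quota_max_objects")

    print(pool_quota("volumes"))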
Nov 25 20:44:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1180: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:18 compute-0 ceph-mon[75144]: pgmap v1180: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1181: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:19 compute-0 ceph-mon[75144]: pgmap v1181: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1182: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:21 compute-0 ceph-mon[75144]: pgmap v1182: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1183: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:23 compute-0 ceph-mon[75144]: pgmap v1183: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1184: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:25 compute-0 ceph-mon[75144]: pgmap v1184: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:26 compute-0 sudo[269587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:44:26 compute-0 sudo[269587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:26 compute-0 sudo[269587]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:26 compute-0 sudo[269612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:44:26 compute-0 sudo[269612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:26 compute-0 sudo[269612]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:26 compute-0 sudo[269637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:44:26 compute-0 sudo[269637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:26 compute-0 sudo[269637]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:26 compute-0 sudo[269662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:44:26 compute-0 sudo[269662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:44:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:44:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:44:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:44:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:44:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:44:27 compute-0 sudo[269662]: pam_unix(sudo:session): session closed for user root
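[Annotation] The sudo bursts from ceph-admin are cephadm's mgr module operating this host over SSH: two cheap probes (/bin/true to confirm passwordless sudo, which python3 to find an interpreter) followed by the versioned cephadm binary, here with gather-facts. A local re-creation of just the probe sequence (a sketch; the real module runs these remotely over its SSH connection):

    import subprocess

    probes = [
        ["sudo", "/bin/true"],              # can we sudo without a password?
        ["sudo", "/bin/which", "python3"],  # is there an interpreter for cephadm?
    ]
    for cmd in probes:
        r = subprocess.run(cmd, capture_output=True, text=True)
        print(cmd, "->", r.returncode, r.stdout.strip())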
Nov 25 20:44:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:44:27 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:44:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:44:27 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:44:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:44:27 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:44:27 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 36750403-d2f1-4494-928f-57b196904596 does not exist
Nov 25 20:44:27 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 539465ac-5544-45aa-87d3-d9895b063ca3 does not exist
Nov 25 20:44:27 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d6aa592f-dd6f-4576-8937-11421b913121 does not exist
Nov 25 20:44:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:44:27 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:44:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:44:27 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:44:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:44:27 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:44:27 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:44:27 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:44:27 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:44:27 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:44:27 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:44:27 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:44:27 compute-0 sudo[269718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:44:27 compute-0 sudo[269718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:27 compute-0 sudo[269718]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:27 compute-0 sudo[269743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:44:27 compute-0 sudo[269743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:27 compute-0 sudo[269743]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:27 compute-0 sudo[269768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:44:27 compute-0 sudo[269768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:27 compute-0 sudo[269768]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:27 compute-0 sudo[269793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:44:27 compute-0 sudo[269793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1185: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:28 compute-0 podman[269858]: 2025-11-25 20:44:28.15925121 +0000 UTC m=+0.062644631 container create 4cf49e69f57e0e3ca04af3f1c4b4a6596f184ccbbe372174b9f6cc0ba37c6271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:44:28 compute-0 systemd[1]: Started libpod-conmon-4cf49e69f57e0e3ca04af3f1c4b4a6596f184ccbbe372174b9f6cc0ba37c6271.scope.
Nov 25 20:44:28 compute-0 podman[269858]: 2025-11-25 20:44:28.127855523 +0000 UTC m=+0.031248984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:44:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:44:28 compute-0 podman[269858]: 2025-11-25 20:44:28.273300037 +0000 UTC m=+0.176693508 container init 4cf49e69f57e0e3ca04af3f1c4b4a6596f184ccbbe372174b9f6cc0ba37c6271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:44:28 compute-0 podman[269858]: 2025-11-25 20:44:28.286153163 +0000 UTC m=+0.189546594 container start 4cf49e69f57e0e3ca04af3f1c4b4a6596f184ccbbe372174b9f6cc0ba37c6271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:44:28 compute-0 podman[269858]: 2025-11-25 20:44:28.290309296 +0000 UTC m=+0.193702717 container attach 4cf49e69f57e0e3ca04af3f1c4b4a6596f184ccbbe372174b9f6cc0ba37c6271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:44:28 compute-0 boring_hugle[269874]: 167 167
Nov 25 20:44:28 compute-0 systemd[1]: libpod-4cf49e69f57e0e3ca04af3f1c4b4a6596f184ccbbe372174b9f6cc0ba37c6271.scope: Deactivated successfully.
Nov 25 20:44:28 compute-0 podman[269858]: 2025-11-25 20:44:28.294649203 +0000 UTC m=+0.198042624 container died 4cf49e69f57e0e3ca04af3f1c4b4a6596f184ccbbe372174b9f6cc0ba37c6271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:44:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5db174421db9a8e49d8d8cfda544f1dce78bb428bb12f11c90e8787ba56c83a-merged.mount: Deactivated successfully.
Nov 25 20:44:28 compute-0 podman[269858]: 2025-11-25 20:44:28.352033331 +0000 UTC m=+0.255426752 container remove 4cf49e69f57e0e3ca04af3f1c4b4a6596f184ccbbe372174b9f6cc0ba37c6271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:44:28 compute-0 ceph-mon[75144]: pgmap v1185: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:28 compute-0 systemd[1]: libpod-conmon-4cf49e69f57e0e3ca04af3f1c4b4a6596f184ccbbe372174b9f6cc0ba37c6271.scope: Deactivated successfully.
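[Annotation] The create/init/start/attach/died/remove sequence from podman[269858] is one short-lived helper container (auto-named boring_hugle) that cephadm ran to completion and deleted; its only output was "167 167", which looks like cephadm's uid/gid probe of /var/lib/ceph inside the image (an inference, the exact command is not in the log). An equivalent one-shot invocation against the pinned image digest from the log; the stat command is hypothetical:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # --rm reproduces the container-died / container-remove events above.
    r = subprocess.run(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # assumed probe, not from the log
        capture_output=True, text=True,
    )
    print(r.stdout.strip())  # "167 167": the ceph uid/gid baked into the image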
Nov 25 20:44:28 compute-0 podman[269897]: 2025-11-25 20:44:28.580216857 +0000 UTC m=+0.062594509 container create 5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:44:28 compute-0 systemd[1]: Started libpod-conmon-5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1.scope.
Nov 25 20:44:28 compute-0 podman[269897]: 2025-11-25 20:44:28.554116664 +0000 UTC m=+0.036494316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:44:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b02df9629a0baba3ff0bc4c6f22c2b39ad21704d01e84b3804165aa102b9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b02df9629a0baba3ff0bc4c6f22c2b39ad21704d01e84b3804165aa102b9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b02df9629a0baba3ff0bc4c6f22c2b39ad21704d01e84b3804165aa102b9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b02df9629a0baba3ff0bc4c6f22c2b39ad21704d01e84b3804165aa102b9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b02df9629a0baba3ff0bc4c6f22c2b39ad21704d01e84b3804165aa102b9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:28 compute-0 podman[269897]: 2025-11-25 20:44:28.686634179 +0000 UTC m=+0.169011881 container init 5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:44:28 compute-0 podman[269897]: 2025-11-25 20:44:28.699528927 +0000 UTC m=+0.181906579 container start 5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:44:28 compute-0 podman[269897]: 2025-11-25 20:44:28.704143541 +0000 UTC m=+0.186521253 container attach 5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mccarthy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:44:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1186: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:29 compute-0 ceph-mon[75144]: pgmap v1186: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:29 compute-0 gallant_mccarthy[269913]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:44:29 compute-0 gallant_mccarthy[269913]: --> relative data size: 1.0
Nov 25 20:44:29 compute-0 gallant_mccarthy[269913]: --> All data devices are unavailable
Nov 25 20:44:29 compute-0 systemd[1]: libpod-5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1.scope: Deactivated successfully.
Nov 25 20:44:29 compute-0 systemd[1]: libpod-5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1.scope: Consumed 1.115s CPU time.
Nov 25 20:44:29 compute-0 podman[269897]: 2025-11-25 20:44:29.871015344 +0000 UTC m=+1.353392996 container died 5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mccarthy, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:44:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f58b02df9629a0baba3ff0bc4c6f22c2b39ad21704d01e84b3804165aa102b9f-merged.mount: Deactivated successfully.
Nov 25 20:44:29 compute-0 podman[269897]: 2025-11-25 20:44:29.969569893 +0000 UTC m=+1.451947515 container remove 5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:44:29 compute-0 systemd[1]: libpod-conmon-5b50b66b5c734403f2f3305dda0dc7c96421673df5b3cb55e396d2901329faa1.scope: Deactivated successfully.
Nov 25 20:44:30 compute-0 sudo[269793]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:30 compute-0 sudo[269956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:44:30 compute-0 sudo[269956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:30 compute-0 sudo[269956]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:30 compute-0 sudo[269981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:44:30 compute-0 sudo[269981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:30 compute-0 sudo[269981]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:30 compute-0 sudo[270006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:44:30 compute-0 sudo[270006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:30 compute-0 sudo[270006]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:30 compute-0 sudo[270031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
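[Annotation] The earlier lvm batch attempt ended with "All data devices are unavailable", expected here since the three ceph_vg*/ceph_lv* volumes are most likely already prepared as OSDs in this running 3-OSD cluster; cephadm therefore falls back to inventorying them with the "ceph-volume lvm list --format json" call above. Parsing that output, assuming the documented layout of a dict keyed by OSD id:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    # Top level is {osd_id: [device_entries]}; each entry carries the backing
    # LV path and tags such as ceph.osd_fsid (layout per the ceph-volume docs).
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(osd_id, dev.get("lv_path"),
                  dev.get("tags", {}).get("ceph.osd_fsid"))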
Nov 25 20:44:30 compute-0 sudo[270031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:30 compute-0 podman[270096]: 2025-11-25 20:44:30.798615772 +0000 UTC m=+0.065422206 container create cdc68474f041587c00616ff5b69ac6fe1b6b167758ad80a5b1b0dffadf27e317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:44:30 compute-0 systemd[1]: Started libpod-conmon-cdc68474f041587c00616ff5b69ac6fe1b6b167758ad80a5b1b0dffadf27e317.scope.
Nov 25 20:44:30 compute-0 podman[270096]: 2025-11-25 20:44:30.765172469 +0000 UTC m=+0.031978953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:44:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:44:30 compute-0 podman[270096]: 2025-11-25 20:44:30.890658735 +0000 UTC m=+0.157465149 container init cdc68474f041587c00616ff5b69ac6fe1b6b167758ad80a5b1b0dffadf27e317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:44:30 compute-0 podman[270096]: 2025-11-25 20:44:30.902972137 +0000 UTC m=+0.169778531 container start cdc68474f041587c00616ff5b69ac6fe1b6b167758ad80a5b1b0dffadf27e317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:44:30 compute-0 podman[270096]: 2025-11-25 20:44:30.90641231 +0000 UTC m=+0.173218724 container attach cdc68474f041587c00616ff5b69ac6fe1b6b167758ad80a5b1b0dffadf27e317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:44:30 compute-0 hopeful_ishizaka[270112]: 167 167
Nov 25 20:44:30 compute-0 systemd[1]: libpod-cdc68474f041587c00616ff5b69ac6fe1b6b167758ad80a5b1b0dffadf27e317.scope: Deactivated successfully.
Nov 25 20:44:30 compute-0 podman[270096]: 2025-11-25 20:44:30.910739157 +0000 UTC m=+0.177545591 container died cdc68474f041587c00616ff5b69ac6fe1b6b167758ad80a5b1b0dffadf27e317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ishizaka, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2aa0b5700d161e9bba773e002144259a0ff81c61599d260a7845741393d6197e-merged.mount: Deactivated successfully.
Nov 25 20:44:30 compute-0 podman[270096]: 2025-11-25 20:44:30.96719554 +0000 UTC m=+0.234001934 container remove cdc68474f041587c00616ff5b69ac6fe1b6b167758ad80a5b1b0dffadf27e317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:44:30 compute-0 systemd[1]: libpod-conmon-cdc68474f041587c00616ff5b69ac6fe1b6b167758ad80a5b1b0dffadf27e317.scope: Deactivated successfully.
Nov 25 20:44:31 compute-0 podman[270136]: 2025-11-25 20:44:31.200536726 +0000 UTC m=+0.065243262 container create 96168aabc8876847ebdb279078ab2e52eedfb624de2e20804b83fa57a6abf973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 20:44:31 compute-0 systemd[1]: Started libpod-conmon-96168aabc8876847ebdb279078ab2e52eedfb624de2e20804b83fa57a6abf973.scope.
Nov 25 20:44:31 compute-0 podman[270136]: 2025-11-25 20:44:31.174416881 +0000 UTC m=+0.039123467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:44:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa595ca9a46f5ceeb2bd8808c606a4cee26164b57302f8a828e05eeed8739c20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa595ca9a46f5ceeb2bd8808c606a4cee26164b57302f8a828e05eeed8739c20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa595ca9a46f5ceeb2bd8808c606a4cee26164b57302f8a828e05eeed8739c20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa595ca9a46f5ceeb2bd8808c606a4cee26164b57302f8a828e05eeed8739c20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:31 compute-0 podman[270136]: 2025-11-25 20:44:31.314182902 +0000 UTC m=+0.178889438 container init 96168aabc8876847ebdb279078ab2e52eedfb624de2e20804b83fa57a6abf973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:44:31 compute-0 podman[270136]: 2025-11-25 20:44:31.328081877 +0000 UTC m=+0.192788373 container start 96168aabc8876847ebdb279078ab2e52eedfb624de2e20804b83fa57a6abf973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:44:31 compute-0 podman[270136]: 2025-11-25 20:44:31.332692142 +0000 UTC m=+0.197398648 container attach 96168aabc8876847ebdb279078ab2e52eedfb624de2e20804b83fa57a6abf973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:44:31 compute-0 podman[270150]: 2025-11-25 20:44:31.356351979 +0000 UTC m=+0.105331472 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 20:44:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1187: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:31 compute-0 ceph-mon[75144]: pgmap v1187: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:32 compute-0 youthful_feistel[270158]: {
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:     "0": [
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:         {
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "devices": [
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "/dev/loop3"
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             ],
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_name": "ceph_lv0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_size": "21470642176",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "name": "ceph_lv0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "tags": {
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.cluster_name": "ceph",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.crush_device_class": "",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.encrypted": "0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.osd_id": "0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.type": "block",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.vdo": "0"
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             },
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "type": "block",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "vg_name": "ceph_vg0"
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:         }
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:     ],
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:     "1": [
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:         {
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "devices": [
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "/dev/loop4"
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             ],
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_name": "ceph_lv1",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_size": "21470642176",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "name": "ceph_lv1",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "tags": {
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.cluster_name": "ceph",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.crush_device_class": "",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.encrypted": "0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.osd_id": "1",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.type": "block",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.vdo": "0"
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             },
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "type": "block",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "vg_name": "ceph_vg1"
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:         }
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:     ],
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:     "2": [
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:         {
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "devices": [
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "/dev/loop5"
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             ],
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_name": "ceph_lv2",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_size": "21470642176",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "name": "ceph_lv2",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "tags": {
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.cluster_name": "ceph",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.crush_device_class": "",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.encrypted": "0",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.osd_id": "2",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.type": "block",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:                 "ceph.vdo": "0"
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             },
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "type": "block",
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:             "vg_name": "ceph_vg2"
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:         }
Nov 25 20:44:32 compute-0 youthful_feistel[270158]:     ]
Nov 25 20:44:32 compute-0 youthful_feistel[270158]: }
Nov 25 20:44:32 compute-0 systemd[1]: libpod-96168aabc8876847ebdb279078ab2e52eedfb624de2e20804b83fa57a6abf973.scope: Deactivated successfully.
Nov 25 20:44:32 compute-0 podman[270136]: 2025-11-25 20:44:32.076602512 +0000 UTC m=+0.941309058 container died 96168aabc8876847ebdb279078ab2e52eedfb624de2e20804b83fa57a6abf973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:44:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa595ca9a46f5ceeb2bd8808c606a4cee26164b57302f8a828e05eeed8739c20-merged.mount: Deactivated successfully.
Nov 25 20:44:32 compute-0 podman[270136]: 2025-11-25 20:44:32.681634306 +0000 UTC m=+1.546340852 container remove 96168aabc8876847ebdb279078ab2e52eedfb624de2e20804b83fa57a6abf973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 20:44:32 compute-0 sudo[270031]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:32 compute-0 systemd[1]: libpod-conmon-96168aabc8876847ebdb279078ab2e52eedfb624de2e20804b83fa57a6abf973.scope: Deactivated successfully.
Nov 25 20:44:32 compute-0 sudo[270194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:44:32 compute-0 sudo[270194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:32 compute-0 sudo[270194]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:32 compute-0 sudo[270220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:44:32 compute-0 sudo[270220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:32 compute-0 sudo[270220]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:33 compute-0 sudo[270245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:44:33 compute-0 sudo[270245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:33 compute-0 sudo[270245]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:33 compute-0 sudo[270276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:44:33 compute-0 sudo[270276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:33 compute-0 podman[270269]: 2025-11-25 20:44:33.154190436 +0000 UTC m=+0.082185608 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 20:44:33 compute-0 podman[270356]: 2025-11-25 20:44:33.660052385 +0000 UTC m=+0.063062713 container create 7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 20:44:33 compute-0 systemd[1]: Started libpod-conmon-7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2.scope.
Nov 25 20:44:33 compute-0 podman[270356]: 2025-11-25 20:44:33.635981975 +0000 UTC m=+0.038992373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:44:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1188: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:44:33 compute-0 podman[270356]: 2025-11-25 20:44:33.773773633 +0000 UTC m=+0.176784031 container init 7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:44:33 compute-0 podman[270356]: 2025-11-25 20:44:33.784347018 +0000 UTC m=+0.187357386 container start 7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 20:44:33 compute-0 podman[270356]: 2025-11-25 20:44:33.78922852 +0000 UTC m=+0.192238878 container attach 7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:44:33 compute-0 elegant_hodgkin[270372]: 167 167
Nov 25 20:44:33 compute-0 systemd[1]: libpod-7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2.scope: Deactivated successfully.
Nov 25 20:44:33 compute-0 conmon[270372]: conmon 7ab849830cf1e84600b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2.scope/container/memory.events
Nov 25 20:44:33 compute-0 podman[270356]: 2025-11-25 20:44:33.793227758 +0000 UTC m=+0.196238106 container died 7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:44:33 compute-0 ceph-mon[75144]: pgmap v1188: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-671a07b31381bdc20343b8376f6dd71c5a202232b973d65d66fa2a2c7604c4dc-merged.mount: Deactivated successfully.
Nov 25 20:44:33 compute-0 podman[270356]: 2025-11-25 20:44:33.850940175 +0000 UTC m=+0.253950523 container remove 7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:44:33 compute-0 systemd[1]: libpod-conmon-7ab849830cf1e84600b67d203468fd4a27d3d3344d5f324566b8cf7860b7aba2.scope: Deactivated successfully.
Nov 25 20:44:34 compute-0 podman[270397]: 2025-11-25 20:44:34.088962027 +0000 UTC m=+0.063745161 container create cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:44:34 compute-0 systemd[1]: Started libpod-conmon-cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa.scope.
Nov 25 20:44:34 compute-0 podman[270397]: 2025-11-25 20:44:34.0612703 +0000 UTC m=+0.036053474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:44:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5beb618820be38dcecef9c647fd2b6cb284938c38fcc07d6949deef043adc7c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5beb618820be38dcecef9c647fd2b6cb284938c38fcc07d6949deef043adc7c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5beb618820be38dcecef9c647fd2b6cb284938c38fcc07d6949deef043adc7c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5beb618820be38dcecef9c647fd2b6cb284938c38fcc07d6949deef043adc7c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:44:34 compute-0 podman[270397]: 2025-11-25 20:44:34.202431478 +0000 UTC m=+0.177214662 container init cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:44:34 compute-0 podman[270397]: 2025-11-25 20:44:34.216918819 +0000 UTC m=+0.191701963 container start cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:44:34 compute-0 podman[270397]: 2025-11-25 20:44:34.220750993 +0000 UTC m=+0.195534137 container attach cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:44:35 compute-0 eager_faraday[270413]: {
Nov 25 20:44:35 compute-0 eager_faraday[270413]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "osd_id": 2,
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "type": "bluestore"
Nov 25 20:44:35 compute-0 eager_faraday[270413]:     },
Nov 25 20:44:35 compute-0 eager_faraday[270413]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "osd_id": 1,
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "type": "bluestore"
Nov 25 20:44:35 compute-0 eager_faraday[270413]:     },
Nov 25 20:44:35 compute-0 eager_faraday[270413]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "osd_id": 0,
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:44:35 compute-0 eager_faraday[270413]:         "type": "bluestore"
Nov 25 20:44:35 compute-0 eager_faraday[270413]:     }
Nov 25 20:44:35 compute-0 eager_faraday[270413]: }
Nov 25 20:44:35 compute-0 systemd[1]: libpod-cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa.scope: Deactivated successfully.
Nov 25 20:44:35 compute-0 podman[270397]: 2025-11-25 20:44:35.41188858 +0000 UTC m=+1.386671684 container died cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:44:35 compute-0 systemd[1]: libpod-cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa.scope: Consumed 1.198s CPU time.
Nov 25 20:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5beb618820be38dcecef9c647fd2b6cb284938c38fcc07d6949deef043adc7c8-merged.mount: Deactivated successfully.
Nov 25 20:44:35 compute-0 podman[270397]: 2025-11-25 20:44:35.488857906 +0000 UTC m=+1.463641050 container remove cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:44:35 compute-0 systemd[1]: libpod-conmon-cb852f7b0cf589a7e9fc964fa4da6ce95c02d787c6ee1ff3bcdaacb58dec92fa.scope: Deactivated successfully.
Nov 25 20:44:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:35 compute-0 sudo[270276]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:44:35 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:44:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:44:35 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:44:35 compute-0 sudo[270458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:44:35 compute-0 sudo[270458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:35 compute-0 sudo[270458]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:35 compute-0 sudo[270483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:44:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1189: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:35 compute-0 sudo[270483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:44:35 compute-0 sudo[270483]: pam_unix(sudo:session): session closed for user root
Nov 25 20:44:36 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:44:36 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:44:36 compute-0 ceph-mon[75144]: pgmap v1189: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1190: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:37 compute-0 ceph-mon[75144]: pgmap v1190: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.829703) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103477829830, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1671, "num_deletes": 251, "total_data_size": 1768928, "memory_usage": 1811840, "flush_reason": "Manual Compaction"}
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103477845655, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1725256, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23020, "largest_seqno": 24690, "table_properties": {"data_size": 1717612, "index_size": 4591, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15694, "raw_average_key_size": 19, "raw_value_size": 1702244, "raw_average_value_size": 2162, "num_data_blocks": 207, "num_entries": 787, "num_filter_entries": 787, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764103301, "oldest_key_time": 1764103301, "file_creation_time": 1764103477, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 15998 microseconds, and 9036 cpu microseconds.
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.845717) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1725256 bytes OK
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.845744) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.848783) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.848831) EVENT_LOG_v1 {"time_micros": 1764103477848823, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.848853) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1761756, prev total WAL file size 1761756, number of live WAL files 2.
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.849839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1684KB)], [56(4776KB)]
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103477849877, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 6616167, "oldest_snapshot_seqno": -1}
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4055 keys, 5464040 bytes, temperature: kUnknown
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103477896014, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 5464040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5436366, "index_size": 16428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99052, "raw_average_key_size": 24, "raw_value_size": 5362918, "raw_average_value_size": 1322, "num_data_blocks": 693, "num_entries": 4055, "num_filter_entries": 4055, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764103477, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.896338) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 5464040 bytes
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.897884) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.1 rd, 118.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 4.7 +0.0 blob) out(5.2 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 4569, records dropped: 514 output_compression: NoCompression
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.897912) EVENT_LOG_v1 {"time_micros": 1764103477897899, "job": 30, "event": "compaction_finished", "compaction_time_micros": 46242, "compaction_time_cpu_micros": 24389, "output_level": 6, "num_output_files": 1, "total_output_size": 5464040, "num_input_records": 4569, "num_output_records": 4055, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103477898637, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103477900356, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.849717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.900476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.900485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.900488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.900491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:44:37 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:44:37.900493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:44:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1191: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:39 compute-0 ceph-mon[75144]: pgmap v1191: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:41 compute-0 podman[270508]: 2025-11-25 20:44:41.079942857 +0000 UTC m=+0.169496384 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 20:44:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1192: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:41 compute-0 ceph-mon[75144]: pgmap v1192: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1193: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:43 compute-0 ceph-mon[75144]: pgmap v1193: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:45 compute-0 nova_compute[248866]: 2025-11-25 20:44:45.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:44:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1194: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:45 compute-0 ceph-mon[75144]: pgmap v1194: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:46 compute-0 nova_compute[248866]: 2025-11-25 20:44:46.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:44:46 compute-0 nova_compute[248866]: 2025-11-25 20:44:46.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:44:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1195: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:47 compute-0 ceph-mon[75144]: pgmap v1195: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:48 compute-0 nova_compute[248866]: 2025-11-25 20:44:48.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:44:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:44:48.959 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:44:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:44:48.961 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:44:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:44:48.961 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:44:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1196: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:49 compute-0 ceph-mon[75144]: pgmap v1196: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.094 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.095 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.095 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.096 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.096 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:44:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:44:50 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573549649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.570 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.758 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.759 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5274MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.760 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.760 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:44:50 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1573549649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.839 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.840 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:44:50 compute-0 nova_compute[248866]: 2025-11-25 20:44:50.864 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:44:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:44:51 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244556240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:44:51 compute-0 nova_compute[248866]: 2025-11-25 20:44:51.320 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:44:51 compute-0 nova_compute[248866]: 2025-11-25 20:44:51.325 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:44:51 compute-0 nova_compute[248866]: 2025-11-25 20:44:51.346 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:44:51 compute-0 nova_compute[248866]: 2025-11-25 20:44:51.347 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:44:51 compute-0 nova_compute[248866]: 2025-11-25 20:44:51.348 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:44:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1197: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:51 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4244556240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:44:51 compute-0 ceph-mon[75144]: pgmap v1197: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:53 compute-0 nova_compute[248866]: 2025-11-25 20:44:53.349 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:44:53 compute-0 nova_compute[248866]: 2025-11-25 20:44:53.349 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:44:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1198: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:53 compute-0 ceph-mon[75144]: pgmap v1198: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:54 compute-0 nova_compute[248866]: 2025-11-25 20:44:54.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:44:54 compute-0 nova_compute[248866]: 2025-11-25 20:44:54.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:44:54 compute-0 nova_compute[248866]: 2025-11-25 20:44:54.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:44:54 compute-0 nova_compute[248866]: 2025-11-25 20:44:54.073 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:44:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:44:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1199: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:55 compute-0 ceph-mon[75144]: pgmap v1199: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:44:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:44:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:44:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:44:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:44:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:44:57
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', '.mgr']
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:44:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1200: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:57 compute-0 ceph-mon[75144]: pgmap v1200: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1201: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:44:59 compute-0 ceph-mon[75144]: pgmap v1201: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1202: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:01 compute-0 ceph-mon[75144]: pgmap v1202: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:02 compute-0 podman[270580]: 2025-11-25 20:45:02.016433563 +0000 UTC m=+0.110027160 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:45:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:45:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1203: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:03 compute-0 ceph-mon[75144]: pgmap v1203: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:04 compute-0 podman[270599]: 2025-11-25 20:45:04.013226626 +0000 UTC m=+0.109003872 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 20:45:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1204: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:05 compute-0 ceph-mon[75144]: pgmap v1204: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1205: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:08 compute-0 ceph-mon[75144]: pgmap v1205: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1206: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:10 compute-0 ceph-mon[75144]: pgmap v1206: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:45:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 5612 writes, 24K keys, 5612 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5612 writes, 5612 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1334 writes, 6021 keys, 1334 commit groups, 1.0 writes per commit group, ingest: 5.77 MB, 0.01 MB/s
                                           Interval WAL: 1334 writes, 1334 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     93.4      0.22              0.11        15    0.015       0      0       0.0       0.0
                                             L6      1/0    5.21 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    102.2     82.7      0.79              0.34        14    0.057     54K   7797       0.0       0.0
                                            Sum      1/0    5.21 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     79.9     85.0      1.01              0.45        29    0.035     54K   7797       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    114.1    117.0      0.21              0.12         8    0.027     17K   2490       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    102.2     82.7      0.79              0.34        14    0.057     54K   7797       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     94.5      0.22              0.11        14    0.016       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.04 MB/s write, 0.08 GB read, 0.03 MB/s read, 1.0 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.04 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585aba031f0#2 capacity: 308.00 MB usage: 9.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000246 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(846,8.84 MB,2.87161%) FilterBlock(30,154.55 KB,0.0490015%) IndexBlock(30,279.92 KB,0.0887536%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 20:45:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1207: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:12 compute-0 podman[270619]: 2025-11-25 20:45:12.014597565 +0000 UTC m=+0.101567171 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 20:45:12 compute-0 ceph-mon[75144]: pgmap v1207: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1208: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:14 compute-0 ceph-mon[75144]: pgmap v1208: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1209: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:16 compute-0 ceph-mon[75144]: pgmap v1209: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:45:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1827156531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:45:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:45:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1827156531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:45:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1210: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1827156531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:45:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1827156531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:45:19 compute-0 ceph-mon[75144]: pgmap v1210: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1211: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:21 compute-0 ceph-mon[75144]: pgmap v1211: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1212: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:23 compute-0 ceph-mon[75144]: pgmap v1212: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1213: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:25 compute-0 ceph-mon[75144]: pgmap v1213: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1214: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:45:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:45:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:45:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:45:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:45:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:45:27 compute-0 ceph-mon[75144]: pgmap v1214: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1215: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:29 compute-0 ceph-mon[75144]: pgmap v1215: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1216: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:31 compute-0 ceph-mon[75144]: pgmap v1216: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1217: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:32 compute-0 podman[270647]: 2025-11-25 20:45:32.975366317 +0000 UTC m=+0.064044959 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 20:45:33 compute-0 ceph-mon[75144]: pgmap v1217: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1218: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:34 compute-0 podman[270667]: 2025-11-25 20:45:34.966854899 +0000 UTC m=+0.066210718 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 20:45:35 compute-0 ceph-mon[75144]: pgmap v1218: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1219: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:35 compute-0 sudo[270687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:45:35 compute-0 sudo[270687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:35 compute-0 sudo[270687]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:35 compute-0 sudo[270712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:45:35 compute-0 sudo[270712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:35 compute-0 sudo[270712]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:36 compute-0 sudo[270737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:45:36 compute-0 sudo[270737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:36 compute-0 sudo[270737]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:36 compute-0 sudo[270762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:45:36 compute-0 sudo[270762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:36 compute-0 sudo[270762]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:45:36 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:45:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:45:36 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:45:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:45:36 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:45:36 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev b0450953-85d0-49af-9b3b-ed4a51dc7681 does not exist
Nov 25 20:45:36 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 19400578-ae3a-4ada-a3bb-d32da2ab4580 does not exist
Nov 25 20:45:36 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 058999a8-ba8b-424a-939e-13b4477c2aff does not exist
Nov 25 20:45:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:45:36 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:45:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:45:36 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:45:36 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:45:36 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:45:36 compute-0 sudo[270818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:45:36 compute-0 sudo[270818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:36 compute-0 sudo[270818]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:36 compute-0 sudo[270843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:45:36 compute-0 sudo[270843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:36 compute-0 sudo[270843]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:36 compute-0 sudo[270868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:45:37 compute-0 sudo[270868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:37 compute-0 sudo[270868]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:37 compute-0 sudo[270893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:45:37 compute-0 sudo[270893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:37 compute-0 ceph-mon[75144]: pgmap v1219: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:37 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:45:37 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:45:37 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:45:37 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:45:37 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:45:37 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:45:37 compute-0 podman[270959]: 2025-11-25 20:45:37.475784172 +0000 UTC m=+0.066631559 container create a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brattain, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:45:37 compute-0 systemd[1]: Started libpod-conmon-a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62.scope.
Nov 25 20:45:37 compute-0 podman[270959]: 2025-11-25 20:45:37.446514372 +0000 UTC m=+0.037361799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:45:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:45:37 compute-0 podman[270959]: 2025-11-25 20:45:37.585149243 +0000 UTC m=+0.175996610 container init a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brattain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:45:37 compute-0 podman[270959]: 2025-11-25 20:45:37.595258345 +0000 UTC m=+0.186105682 container start a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:45:37 compute-0 podman[270959]: 2025-11-25 20:45:37.598878373 +0000 UTC m=+0.189725740 container attach a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brattain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:45:37 compute-0 optimistic_brattain[270975]: 167 167
Nov 25 20:45:37 compute-0 systemd[1]: libpod-a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62.scope: Deactivated successfully.
Nov 25 20:45:37 compute-0 conmon[270975]: conmon a344e593a3b03371261a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62.scope/container/memory.events
Nov 25 20:45:37 compute-0 podman[270959]: 2025-11-25 20:45:37.604648019 +0000 UTC m=+0.195495386 container died a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brattain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c1d90fb9fde20203c931d0a5ce5ec8c986cc49731348cb5ba566b1b83f88f3c-merged.mount: Deactivated successfully.
Nov 25 20:45:37 compute-0 podman[270959]: 2025-11-25 20:45:37.692371355 +0000 UTC m=+0.283218702 container remove a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brattain, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:45:37 compute-0 systemd[1]: libpod-conmon-a344e593a3b03371261ac64a2682a7e33480bd188d1a0436d777721ce30c1a62.scope: Deactivated successfully.
Nov 25 20:45:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1220: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:37 compute-0 podman[270999]: 2025-11-25 20:45:37.896624896 +0000 UTC m=+0.050208996 container create 5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:45:37 compute-0 systemd[1]: Started libpod-conmon-5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741.scope.
Nov 25 20:45:37 compute-0 podman[270999]: 2025-11-25 20:45:37.874867429 +0000 UTC m=+0.028451519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:45:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4bb23aa16d20d94296d4fecf3056380aca4604831d995c9c83a516d15b862d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4bb23aa16d20d94296d4fecf3056380aca4604831d995c9c83a516d15b862d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4bb23aa16d20d94296d4fecf3056380aca4604831d995c9c83a516d15b862d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4bb23aa16d20d94296d4fecf3056380aca4604831d995c9c83a516d15b862d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4bb23aa16d20d94296d4fecf3056380aca4604831d995c9c83a516d15b862d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:38 compute-0 podman[270999]: 2025-11-25 20:45:38.01645555 +0000 UTC m=+0.170039640 container init 5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:45:38 compute-0 podman[270999]: 2025-11-25 20:45:38.025066692 +0000 UTC m=+0.178650752 container start 5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:45:38 compute-0 podman[270999]: 2025-11-25 20:45:38.029377928 +0000 UTC m=+0.182961998 container attach 5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:45:39 compute-0 nova_compute[248866]: 2025-11-25 20:45:39.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:39 compute-0 nova_compute[248866]: 2025-11-25 20:45:39.045 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 20:45:39 compute-0 practical_maxwell[271015]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:45:39 compute-0 practical_maxwell[271015]: --> relative data size: 1.0
Nov 25 20:45:39 compute-0 practical_maxwell[271015]: --> All data devices are unavailable
Nov 25 20:45:39 compute-0 systemd[1]: libpod-5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741.scope: Deactivated successfully.
Nov 25 20:45:39 compute-0 podman[270999]: 2025-11-25 20:45:39.107930277 +0000 UTC m=+1.261514367 container died 5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:45:39 compute-0 systemd[1]: libpod-5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741.scope: Consumed 1.039s CPU time.
Nov 25 20:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb4bb23aa16d20d94296d4fecf3056380aca4604831d995c9c83a516d15b862d-merged.mount: Deactivated successfully.
Nov 25 20:45:39 compute-0 podman[270999]: 2025-11-25 20:45:39.213187328 +0000 UTC m=+1.366771428 container remove 5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:45:39 compute-0 systemd[1]: libpod-conmon-5abcb0d34cff7bf260924b3ab3bf5b603e6a790dec99928706063cbf9753c741.scope: Deactivated successfully.
Nov 25 20:45:39 compute-0 ceph-mon[75144]: pgmap v1220: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:39 compute-0 sudo[270893]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:39 compute-0 sudo[271057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:45:39 compute-0 sudo[271057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:39 compute-0 sudo[271057]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:39 compute-0 sudo[271082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:45:39 compute-0 sudo[271082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:39 compute-0 sudo[271082]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:39 compute-0 sudo[271107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:45:39 compute-0 sudo[271107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:39 compute-0 sudo[271107]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:39 compute-0 sudo[271132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:45:39 compute-0 sudo[271132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1221: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:40 compute-0 podman[271195]: 2025-11-25 20:45:40.010994563 +0000 UTC m=+0.042557940 container create dec2db5aed328d26763e6e1b44fcd346fbc86cd02aa6820451eed8ebc58d31a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:45:40 compute-0 systemd[1]: Started libpod-conmon-dec2db5aed328d26763e6e1b44fcd346fbc86cd02aa6820451eed8ebc58d31a3.scope.
Nov 25 20:45:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:45:40 compute-0 podman[271195]: 2025-11-25 20:45:39.992332829 +0000 UTC m=+0.023896226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:45:40 compute-0 podman[271195]: 2025-11-25 20:45:40.096563771 +0000 UTC m=+0.128127238 container init dec2db5aed328d26763e6e1b44fcd346fbc86cd02aa6820451eed8ebc58d31a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:45:40 compute-0 podman[271195]: 2025-11-25 20:45:40.110883277 +0000 UTC m=+0.142446654 container start dec2db5aed328d26763e6e1b44fcd346fbc86cd02aa6820451eed8ebc58d31a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:45:40 compute-0 podman[271195]: 2025-11-25 20:45:40.115064741 +0000 UTC m=+0.146628138 container attach dec2db5aed328d26763e6e1b44fcd346fbc86cd02aa6820451eed8ebc58d31a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:45:40 compute-0 priceless_neumann[271211]: 167 167
Nov 25 20:45:40 compute-0 systemd[1]: libpod-dec2db5aed328d26763e6e1b44fcd346fbc86cd02aa6820451eed8ebc58d31a3.scope: Deactivated successfully.
Nov 25 20:45:40 compute-0 podman[271195]: 2025-11-25 20:45:40.117329292 +0000 UTC m=+0.148892669 container died dec2db5aed328d26763e6e1b44fcd346fbc86cd02aa6820451eed8ebc58d31a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2d98e4cb77b3a83702603ef106a6d852fcf5f000195bcfdfac3bf1b6f2a9141-merged.mount: Deactivated successfully.
Nov 25 20:45:40 compute-0 podman[271195]: 2025-11-25 20:45:40.158356648 +0000 UTC m=+0.189920025 container remove dec2db5aed328d26763e6e1b44fcd346fbc86cd02aa6820451eed8ebc58d31a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:45:40 compute-0 systemd[1]: libpod-conmon-dec2db5aed328d26763e6e1b44fcd346fbc86cd02aa6820451eed8ebc58d31a3.scope: Deactivated successfully.
Nov 25 20:45:40 compute-0 podman[271234]: 2025-11-25 20:45:40.421070787 +0000 UTC m=+0.091824169 container create cab89d95df5a1ad8f65f987cfc67a2d25568e3e09536f0f0c1a2139143c0d004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:45:40 compute-0 podman[271234]: 2025-11-25 20:45:40.362387513 +0000 UTC m=+0.033140985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:45:40 compute-0 systemd[1]: Started libpod-conmon-cab89d95df5a1ad8f65f987cfc67a2d25568e3e09536f0f0c1a2139143c0d004.scope.
Nov 25 20:45:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b810cbc89e0b835db9d38a52e440041340697dc941ffd8eac30cf17aadf4f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b810cbc89e0b835db9d38a52e440041340697dc941ffd8eac30cf17aadf4f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b810cbc89e0b835db9d38a52e440041340697dc941ffd8eac30cf17aadf4f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b810cbc89e0b835db9d38a52e440041340697dc941ffd8eac30cf17aadf4f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:40 compute-0 podman[271234]: 2025-11-25 20:45:40.526640585 +0000 UTC m=+0.197394007 container init cab89d95df5a1ad8f65f987cfc67a2d25568e3e09536f0f0c1a2139143c0d004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:45:40 compute-0 podman[271234]: 2025-11-25 20:45:40.535430062 +0000 UTC m=+0.206183474 container start cab89d95df5a1ad8f65f987cfc67a2d25568e3e09536f0f0c1a2139143c0d004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_vaughan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:45:40 compute-0 podman[271234]: 2025-11-25 20:45:40.541736782 +0000 UTC m=+0.212490254 container attach cab89d95df5a1ad8f65f987cfc67a2d25568e3e09536f0f0c1a2139143c0d004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:45:41 compute-0 ceph-mon[75144]: pgmap v1221: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]: {
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:     "0": [
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:         {
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "devices": [
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "/dev/loop3"
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             ],
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_name": "ceph_lv0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_size": "21470642176",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "name": "ceph_lv0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "tags": {
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.cluster_name": "ceph",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.crush_device_class": "",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.encrypted": "0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.osd_id": "0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.type": "block",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.vdo": "0"
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             },
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "type": "block",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "vg_name": "ceph_vg0"
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:         }
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:     ],
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:     "1": [
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:         {
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "devices": [
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "/dev/loop4"
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             ],
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_name": "ceph_lv1",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_size": "21470642176",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "name": "ceph_lv1",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "tags": {
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.cluster_name": "ceph",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.crush_device_class": "",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.encrypted": "0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.osd_id": "1",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.type": "block",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.vdo": "0"
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             },
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "type": "block",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "vg_name": "ceph_vg1"
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:         }
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:     ],
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:     "2": [
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:         {
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "devices": [
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "/dev/loop5"
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             ],
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_name": "ceph_lv2",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_size": "21470642176",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "name": "ceph_lv2",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "tags": {
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.cluster_name": "ceph",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.crush_device_class": "",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.encrypted": "0",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.osd_id": "2",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.type": "block",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:                 "ceph.vdo": "0"
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             },
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "type": "block",
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:             "vg_name": "ceph_vg2"
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:         }
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]:     ]
Nov 25 20:45:41 compute-0 heuristic_vaughan[271250]: }
Nov 25 20:45:41 compute-0 systemd[1]: libpod-cab89d95df5a1ad8f65f987cfc67a2d25568e3e09536f0f0c1a2139143c0d004.scope: Deactivated successfully.
Nov 25 20:45:41 compute-0 podman[271234]: 2025-11-25 20:45:41.360643976 +0000 UTC m=+1.031397368 container died cab89d95df5a1ad8f65f987cfc67a2d25568e3e09536f0f0c1a2139143c0d004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_vaughan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-74b810cbc89e0b835db9d38a52e440041340697dc941ffd8eac30cf17aadf4f1-merged.mount: Deactivated successfully.
Nov 25 20:45:41 compute-0 podman[271234]: 2025-11-25 20:45:41.423857442 +0000 UTC m=+1.094610814 container remove cab89d95df5a1ad8f65f987cfc67a2d25568e3e09536f0f0c1a2139143c0d004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:45:41 compute-0 systemd[1]: libpod-conmon-cab89d95df5a1ad8f65f987cfc67a2d25568e3e09536f0f0c1a2139143c0d004.scope: Deactivated successfully.
Nov 25 20:45:41 compute-0 sudo[271132]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:41 compute-0 sudo[271273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:45:41 compute-0 sudo[271273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:41 compute-0 sudo[271273]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:41 compute-0 sudo[271298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:45:41 compute-0 sudo[271298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:41 compute-0 sudo[271298]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:41 compute-0 sudo[271323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:45:41 compute-0 sudo[271323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:41 compute-0 sudo[271323]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1222: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:41 compute-0 sudo[271348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:45:41 compute-0 sudo[271348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:42 compute-0 podman[271413]: 2025-11-25 20:45:42.136180911 +0000 UTC m=+0.050868223 container create 0c23376106d2cdcc4bd3876a187e4cff9f7763c7d83acc709a6ae725161a9f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:45:42 compute-0 systemd[1]: Started libpod-conmon-0c23376106d2cdcc4bd3876a187e4cff9f7763c7d83acc709a6ae725161a9f01.scope.
Nov 25 20:45:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:45:42 compute-0 podman[271413]: 2025-11-25 20:45:42.109343417 +0000 UTC m=+0.024030729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:45:42 compute-0 podman[271413]: 2025-11-25 20:45:42.221541654 +0000 UTC m=+0.136229006 container init 0c23376106d2cdcc4bd3876a187e4cff9f7763c7d83acc709a6ae725161a9f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:45:42 compute-0 podman[271413]: 2025-11-25 20:45:42.236031235 +0000 UTC m=+0.150718537 container start 0c23376106d2cdcc4bd3876a187e4cff9f7763c7d83acc709a6ae725161a9f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:45:42 compute-0 podman[271413]: 2025-11-25 20:45:42.240443254 +0000 UTC m=+0.155130546 container attach 0c23376106d2cdcc4bd3876a187e4cff9f7763c7d83acc709a6ae725161a9f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:45:42 compute-0 eager_visvesvaraya[271429]: 167 167
Nov 25 20:45:42 compute-0 systemd[1]: libpod-0c23376106d2cdcc4bd3876a187e4cff9f7763c7d83acc709a6ae725161a9f01.scope: Deactivated successfully.
Nov 25 20:45:42 compute-0 podman[271413]: 2025-11-25 20:45:42.243092585 +0000 UTC m=+0.157779857 container died 0c23376106d2cdcc4bd3876a187e4cff9f7763c7d83acc709a6ae725161a9f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e1972eafb74c9077dcc477323c61d2ef6678138ccddb61499d19e574b7b9e42-merged.mount: Deactivated successfully.
Nov 25 20:45:42 compute-0 podman[271413]: 2025-11-25 20:45:42.301508861 +0000 UTC m=+0.216196173 container remove 0c23376106d2cdcc4bd3876a187e4cff9f7763c7d83acc709a6ae725161a9f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 25 20:45:42 compute-0 systemd[1]: libpod-conmon-0c23376106d2cdcc4bd3876a187e4cff9f7763c7d83acc709a6ae725161a9f01.scope: Deactivated successfully.
Nov 25 20:45:42 compute-0 podman[271426]: 2025-11-25 20:45:42.367671307 +0000 UTC m=+0.173931424 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 20:45:42 compute-0 podman[271477]: 2025-11-25 20:45:42.515909617 +0000 UTC m=+0.062979071 container create d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:45:42 compute-0 systemd[1]: Started libpod-conmon-d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35.scope.
Nov 25 20:45:42 compute-0 podman[271477]: 2025-11-25 20:45:42.486977575 +0000 UTC m=+0.034047069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:45:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b118a07bee1bb7731a20a72d7a07d42a8448016f012809c50986825ad4e2e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b118a07bee1bb7731a20a72d7a07d42a8448016f012809c50986825ad4e2e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b118a07bee1bb7731a20a72d7a07d42a8448016f012809c50986825ad4e2e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b118a07bee1bb7731a20a72d7a07d42a8448016f012809c50986825ad4e2e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:45:42 compute-0 podman[271477]: 2025-11-25 20:45:42.63130577 +0000 UTC m=+0.178375274 container init d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:45:42 compute-0 podman[271477]: 2025-11-25 20:45:42.648401161 +0000 UTC m=+0.195470585 container start d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:45:42 compute-0 podman[271477]: 2025-11-25 20:45:42.652392148 +0000 UTC m=+0.199461602 container attach d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:45:43 compute-0 ceph-mon[75144]: pgmap v1222: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:43 compute-0 agitated_moser[271493]: {
Nov 25 20:45:43 compute-0 agitated_moser[271493]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "osd_id": 2,
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "type": "bluestore"
Nov 25 20:45:43 compute-0 agitated_moser[271493]:     },
Nov 25 20:45:43 compute-0 agitated_moser[271493]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "osd_id": 1,
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "type": "bluestore"
Nov 25 20:45:43 compute-0 agitated_moser[271493]:     },
Nov 25 20:45:43 compute-0 agitated_moser[271493]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "osd_id": 0,
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:45:43 compute-0 agitated_moser[271493]:         "type": "bluestore"
Nov 25 20:45:43 compute-0 agitated_moser[271493]:     }
Nov 25 20:45:43 compute-0 agitated_moser[271493]: }
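[annotation] The JSON block printed by the short-lived ceph container above maps each osd_uuid to its backing logical volume (the shape matches `ceph-volume raw list`-style output, though the exact subcommand is not visible in the log). A small sketch, under that assumption, of turning it into an osd_id -> device map:

    # Hedged sketch: parse the OSD listing shown above.
    import json

    def osd_devices(raw: str) -> dict[int, str]:
        """Map osd_id -> backing device path from ceph-volume-style JSON."""
        data = json.loads(raw)
        return {entry["osd_id"]: entry["device"] for entry in data.values()}

    # With the output logged above this yields:
    #   {2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #    1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #    0: '/dev/mapper/ceph_vg0-ceph_lv0'}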
Nov 25 20:45:43 compute-0 systemd[1]: libpod-d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35.scope: Deactivated successfully.
Nov 25 20:45:43 compute-0 systemd[1]: libpod-d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35.scope: Consumed 1.107s CPU time.
Nov 25 20:45:43 compute-0 podman[271477]: 2025-11-25 20:45:43.744653038 +0000 UTC m=+1.291722492 container died d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:45:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1223: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-03b118a07bee1bb7731a20a72d7a07d42a8448016f012809c50986825ad4e2e8-merged.mount: Deactivated successfully.
Nov 25 20:45:43 compute-0 podman[271477]: 2025-11-25 20:45:43.818930233 +0000 UTC m=+1.365999647 container remove d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:45:43 compute-0 systemd[1]: libpod-conmon-d4c0039f02469cf51b9e9cef45e8faa4649f2f0f845e5eba9cd4932ce41f9a35.scope: Deactivated successfully.
Nov 25 20:45:43 compute-0 sudo[271348]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:45:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:45:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:45:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:45:43 compute-0 sudo[271539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:45:43 compute-0 sudo[271539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:43 compute-0 sudo[271539]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:44 compute-0 sudo[271564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:45:44 compute-0 sudo[271564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:45:44 compute-0 sudo[271564]: pam_unix(sudo:session): session closed for user root
Nov 25 20:45:44 compute-0 ceph-mon[75144]: pgmap v1223: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:44 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:45:44 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:45:45 compute-0 nova_compute[248866]: 2025-11-25 20:45:45.062 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1224: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:46 compute-0 nova_compute[248866]: 2025-11-25 20:45:46.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:46 compute-0 ceph-mon[75144]: pgmap v1224: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:47 compute-0 nova_compute[248866]: 2025-11-25 20:45:47.037 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1225: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:48 compute-0 nova_compute[248866]: 2025-11-25 20:45:48.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:48 compute-0 nova_compute[248866]: 2025-11-25 20:45:48.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:48 compute-0 nova_compute[248866]: 2025-11-25 20:45:48.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 20:45:48 compute-0 nova_compute[248866]: 2025-11-25 20:45:48.065 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 20:45:48 compute-0 ceph-mon[75144]: pgmap v1225: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:45:48.961 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:45:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:45:48.961 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:45:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:45:48.962 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
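[annotation] The three DEBUG lines above show oslo.concurrency serializing neutron's child-process check behind a named lock. A minimal sketch of the same pattern; the lock name is taken from the log, but the class here is illustrative and not neutron's actual implementation:

    # Hedged sketch of the acquire/release pattern logged above.
    from oslo_concurrency import lockutils

    class ProcessMonitorSketch:
        @lockutils.synchronized("_check_child_processes")
        def check_child_processes(self):
            # Only one caller at a time may inspect child-process liveness;
            # lockutils logs the acquire/wait/release timings seen above.
            pass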
Nov 25 20:45:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1226: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:50 compute-0 ceph-mon[75144]: pgmap v1226: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:51 compute-0 nova_compute[248866]: 2025-11-25 20:45:51.064 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:51 compute-0 nova_compute[248866]: 2025-11-25 20:45:51.065 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:45:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1227: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.080 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.081 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.081 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.081 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.082 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:45:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:45:52 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1541652505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.547 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
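[annotation] nova_compute shells out to `ceph df --format=json` above to audit RBD pool capacity. A hedged sketch of consuming that output; the key names ("pools", "stats", "max_avail") follow recent Ceph releases and are an assumption about the schema, not guaranteed by the log:

    # Hedged sketch: read per-pool free space from the same command nova runs.
    import json
    import subprocess

    def pool_max_avail_gb(pool: str = "vms") -> float:
        out = subprocess.check_output(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            text=True,
        )
        df = json.loads(out)
        for entry in df["pools"]:
            if entry["name"] == pool:
                # max_avail is reported in bytes
                return entry["stats"]["max_avail"] / 1024 ** 3
        raise KeyError(pool)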
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.728 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.729 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5283MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.730 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.730 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.807 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.808 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:45:52 compute-0 nova_compute[248866]: 2025-11-25 20:45:52.830 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:45:52 compute-0 ceph-mon[75144]: pgmap v1227: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:52 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1541652505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:45:53 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:45:53 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19553722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:45:53 compute-0 nova_compute[248866]: 2025-11-25 20:45:53.303 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:45:53 compute-0 nova_compute[248866]: 2025-11-25 20:45:53.311 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:45:53 compute-0 nova_compute[248866]: 2025-11-25 20:45:53.333 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
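[annotation] The inventory dict above is what the resource tracker reports to Placement. Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio; a worked sketch using the exact figures from the log line:

    # Hedged sketch: capacity arithmetic implied by the logged inventory.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1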
Nov 25 20:45:53 compute-0 nova_compute[248866]: 2025-11-25 20:45:53.335 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:45:53 compute-0 nova_compute[248866]: 2025-11-25 20:45:53.335 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:45:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1228: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:53 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/19553722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:45:54 compute-0 ceph-mon[75144]: pgmap v1228: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:55 compute-0 nova_compute[248866]: 2025-11-25 20:45:55.336 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:55 compute-0 nova_compute[248866]: 2025-11-25 20:45:55.337 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:45:55 compute-0 nova_compute[248866]: 2025-11-25 20:45:55.337 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:45:55 compute-0 nova_compute[248866]: 2025-11-25 20:45:55.358 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:45:55 compute-0 nova_compute[248866]: 2025-11-25 20:45:55.359 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:55 compute-0 nova_compute[248866]: 2025-11-25 20:45:55.360 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:45:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1229: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:45:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:45:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:45:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:45:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:45:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:45:56 compute-0 ceph-mon[75144]: pgmap v1229: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:45:57
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups', 'volumes', 'images']
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:45:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1230: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:58 compute-0 nova_compute[248866]: 2025-11-25 20:45:58.061 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:45:58 compute-0 ceph-mon[75144]: pgmap v1230: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:45:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1231: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:00 compute-0 nova_compute[248866]: 2025-11-25 20:46:00.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:00 compute-0 ceph-mon[75144]: pgmap v1231: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1232: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:46:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
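[annotation] The pg_autoscaler lines above show raw pg targets being "quantized": the ideal value is rounded to a power of two, and pg_num is left alone when there is no usage signal or the change is too small to act on (upstream uses a threshold factor, default 3.0). A simplified sketch that reproduces the logged outcomes ("pg target 0.0 quantized to 32 (current 32)", ".mgr target 0.0043... quantized to 1"); this mirrors the behaviour, not the mgr module's actual code:

    # Hedged sketch of the quantization decision visible in the log.
    import math

    def quantize_pg_num(target: float, current: int, threshold: float = 3.0) -> int:
        if target <= 0:
            return current          # no usage signal: keep current pg_num
        ideal = max(2 ** round(math.log2(target)), 1)  # nearest power of two, floor 1
        if max(ideal, current) / min(ideal, current) < threshold:
            return current          # change too small to be worth a resize
        return ideal

    # quantize_pg_num(0.0, 32)            -> 32   (pool 'vms' above)
    # quantize_pg_num(0.004311, 1)        -> 1    (pool '.mgr' above)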
Nov 25 20:46:02 compute-0 ceph-mon[75144]: pgmap v1232: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1233: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:03 compute-0 podman[271633]: 2025-11-25 20:46:03.97152509 +0000 UTC m=+0.067088151 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:46:04 compute-0 ceph-mon[75144]: pgmap v1233: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1234: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:05 compute-0 podman[271652]: 2025-11-25 20:46:05.974165772 +0000 UTC m=+0.070238777 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 20:46:07 compute-0 ceph-mon[75144]: pgmap v1234: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:07 compute-0 nova_compute[248866]: 2025-11-25 20:46:07.555 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1235: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:09 compute-0 ceph-mon[75144]: pgmap v1235: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1236: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:11 compute-0 ceph-mon[75144]: pgmap v1236: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1237: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:12 compute-0 podman[271673]: 2025-11-25 20:46:12.992171201 +0000 UTC m=+0.092917117 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Nov 25 20:46:13 compute-0 ceph-mon[75144]: pgmap v1237: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1238: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:15 compute-0 ceph-mon[75144]: pgmap v1238: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1239: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:46:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2491965410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:46:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:46:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2491965410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:46:17 compute-0 ceph-mon[75144]: pgmap v1239: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2491965410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:46:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2491965410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:46:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1240: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:19 compute-0 ceph-mon[75144]: pgmap v1240: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1241: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:21 compute-0 ceph-mon[75144]: pgmap v1241: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1242: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:23 compute-0 ceph-mon[75144]: pgmap v1242: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1243: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:25 compute-0 ceph-mon[75144]: pgmap v1243: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1244: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:46:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:46:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:46:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:46:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:46:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:46:27 compute-0 ceph-mon[75144]: pgmap v1244: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1245: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:29 compute-0 ceph-mon[75144]: pgmap v1245: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1246: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:31 compute-0 ceph-mon[75144]: pgmap v1246: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1247: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:33 compute-0 ceph-mon[75144]: pgmap v1247: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1248: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:35 compute-0 podman[271700]: 2025-11-25 20:46:35.001745083 +0000 UTC m=+0.086869085 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:46:35 compute-0 ceph-mon[75144]: pgmap v1248: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1249: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:36 compute-0 podman[271719]: 2025-11-25 20:46:36.997997638 +0000 UTC m=+0.095246084 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:46:37 compute-0 ceph-mon[75144]: pgmap v1249: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1250: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:39 compute-0 ceph-mon[75144]: pgmap v1250: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1251: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:41 compute-0 ceph-mon[75144]: pgmap v1251: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1252: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:43 compute-0 ceph-mon[75144]: pgmap v1252: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1253: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:44 compute-0 podman[271740]: 2025-11-25 20:46:44.041173288 +0000 UTC m=+0.130676948 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 25 20:46:44 compute-0 sudo[271768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:46:44 compute-0 sudo[271768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:44 compute-0 sudo[271768]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:44 compute-0 sudo[271793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:46:44 compute-0 sudo[271793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:44 compute-0 sudo[271793]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:44 compute-0 sudo[271818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:46:44 compute-0 sudo[271818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:44 compute-0 sudo[271818]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:44 compute-0 sudo[271843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:46:44 compute-0 sudo[271843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:45 compute-0 nova_compute[248866]: 2025-11-25 20:46:45.059 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:45 compute-0 sudo[271843]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:46:45 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:46:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:46:45 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:46:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:46:45 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:46:45 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 2e0c5049-2c5c-4978-b062-9619d5309955 does not exist
Nov 25 20:46:45 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev e83249e0-a6a6-4e5b-8591-277fe02373e1 does not exist
Nov 25 20:46:45 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev cf080ec4-2f19-4095-8d34-b106f8ee2cc0 does not exist
Nov 25 20:46:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:46:45 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:46:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:46:45 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:46:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:46:45 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:46:45 compute-0 sudo[271900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:46:45 compute-0 sudo[271900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:45 compute-0 sudo[271900]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:45 compute-0 ceph-mon[75144]: pgmap v1253: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:45 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:46:45 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:46:45 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:46:45 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:46:45 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:46:45 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:46:45 compute-0 sudo[271925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:46:45 compute-0 sudo[271925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:45 compute-0 sudo[271925]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:45 compute-0 sudo[271950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:46:45 compute-0 sudo[271950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:45 compute-0 sudo[271950]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:45 compute-0 sudo[271975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:46:45 compute-0 sudo[271975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1254: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:45 compute-0 podman[272039]: 2025-11-25 20:46:45.988660974 +0000 UTC m=+0.073671366 container create 853ac701c30d9fe439960f4655e90fc5534f8ee1e578fee779bb34c2f3b3f404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 20:46:46 compute-0 nova_compute[248866]: 2025-11-25 20:46:46.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:46 compute-0 systemd[1]: Started libpod-conmon-853ac701c30d9fe439960f4655e90fc5534f8ee1e578fee779bb34c2f3b3f404.scope.
Nov 25 20:46:46 compute-0 podman[272039]: 2025-11-25 20:46:45.954691539 +0000 UTC m=+0.039701971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:46:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:46:46 compute-0 podman[272039]: 2025-11-25 20:46:46.10167557 +0000 UTC m=+0.186685942 container init 853ac701c30d9fe439960f4655e90fc5534f8ee1e578fee779bb34c2f3b3f404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 20:46:46 compute-0 podman[272039]: 2025-11-25 20:46:46.114073437 +0000 UTC m=+0.199083819 container start 853ac701c30d9fe439960f4655e90fc5534f8ee1e578fee779bb34c2f3b3f404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:46:46 compute-0 podman[272039]: 2025-11-25 20:46:46.118139308 +0000 UTC m=+0.203149680 container attach 853ac701c30d9fe439960f4655e90fc5534f8ee1e578fee779bb34c2f3b3f404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:46:46 compute-0 practical_stonebraker[272055]: 167 167
Nov 25 20:46:46 compute-0 systemd[1]: libpod-853ac701c30d9fe439960f4655e90fc5534f8ee1e578fee779bb34c2f3b3f404.scope: Deactivated successfully.
Nov 25 20:46:46 compute-0 podman[272039]: 2025-11-25 20:46:46.124739578 +0000 UTC m=+0.209750020 container died 853ac701c30d9fe439960f4655e90fc5534f8ee1e578fee779bb34c2f3b3f404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-156d74627b414a0b3a9952cac0981f33e220853b71719fb7d8ae782b4b7f095b-merged.mount: Deactivated successfully.
Nov 25 20:46:46 compute-0 podman[272039]: 2025-11-25 20:46:46.197687583 +0000 UTC m=+0.282697945 container remove 853ac701c30d9fe439960f4655e90fc5534f8ee1e578fee779bb34c2f3b3f404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:46:46 compute-0 systemd[1]: libpod-conmon-853ac701c30d9fe439960f4655e90fc5534f8ee1e578fee779bb34c2f3b3f404.scope: Deactivated successfully.
Nov 25 20:46:46 compute-0 podman[272079]: 2025-11-25 20:46:46.464486885 +0000 UTC m=+0.078833247 container create 5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:46:46 compute-0 systemd[1]: Started libpod-conmon-5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f.scope.
Nov 25 20:46:46 compute-0 podman[272079]: 2025-11-25 20:46:46.432040972 +0000 UTC m=+0.046387384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:46:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54e1f53f0c3ec9e5b6dc1ee4aa05224d094501e2fb895bc2a40e9d819efeff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54e1f53f0c3ec9e5b6dc1ee4aa05224d094501e2fb895bc2a40e9d819efeff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54e1f53f0c3ec9e5b6dc1ee4aa05224d094501e2fb895bc2a40e9d819efeff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54e1f53f0c3ec9e5b6dc1ee4aa05224d094501e2fb895bc2a40e9d819efeff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54e1f53f0c3ec9e5b6dc1ee4aa05224d094501e2fb895bc2a40e9d819efeff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:46 compute-0 podman[272079]: 2025-11-25 20:46:46.580786651 +0000 UTC m=+0.195132983 container init 5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:46:46 compute-0 podman[272079]: 2025-11-25 20:46:46.593849885 +0000 UTC m=+0.208196207 container start 5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 20:46:46 compute-0 podman[272079]: 2025-11-25 20:46:46.597081564 +0000 UTC m=+0.211427886 container attach 5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:46:47 compute-0 ceph-mon[75144]: pgmap v1254: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:47 compute-0 stoic_stonebraker[272095]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:46:47 compute-0 stoic_stonebraker[272095]: --> relative data size: 1.0
Nov 25 20:46:47 compute-0 stoic_stonebraker[272095]: --> All data devices are unavailable
Nov 25 20:46:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1255: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:47 compute-0 systemd[1]: libpod-5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f.scope: Deactivated successfully.
Nov 25 20:46:47 compute-0 podman[272079]: 2025-11-25 20:46:47.813523203 +0000 UTC m=+1.427869565 container died 5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:46:47 compute-0 systemd[1]: libpod-5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f.scope: Consumed 1.178s CPU time.
Nov 25 20:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a54e1f53f0c3ec9e5b6dc1ee4aa05224d094501e2fb895bc2a40e9d819efeff-merged.mount: Deactivated successfully.
Nov 25 20:46:48 compute-0 nova_compute[248866]: 2025-11-25 20:46:48.044 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:48 compute-0 podman[272079]: 2025-11-25 20:46:48.186124464 +0000 UTC m=+1.800470776 container remove 5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:46:48 compute-0 systemd[1]: libpod-conmon-5863892fd1942fedaf572664cbbcfd3a16059643547c5b8c5ccbb8cfa6c64f3f.scope: Deactivated successfully.
Nov 25 20:46:48 compute-0 sudo[271975]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:48 compute-0 sudo[272138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:46:48 compute-0 sudo[272138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:48 compute-0 sudo[272138]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:48 compute-0 sudo[272163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:46:48 compute-0 sudo[272163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:48 compute-0 sudo[272163]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:48 compute-0 sudo[272188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:46:48 compute-0 sudo[272188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:48 compute-0 sudo[272188]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:48 compute-0 sudo[272213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:46:48 compute-0 sudo[272213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:46:48.963 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:46:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:46:48.964 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:46:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:46:48.964 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:46:48 compute-0 podman[272278]: 2025-11-25 20:46:48.990130108 +0000 UTC m=+0.109707217 container create 777ae09b0032c258a061c273644719f8a6d9c1e26b63e094bbce983a7eda2125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:46:49 compute-0 podman[272278]: 2025-11-25 20:46:48.909146453 +0000 UTC m=+0.028723622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:46:49 compute-0 nova_compute[248866]: 2025-11-25 20:46:49.039 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:49 compute-0 systemd[1]: Started libpod-conmon-777ae09b0032c258a061c273644719f8a6d9c1e26b63e094bbce983a7eda2125.scope.
Nov 25 20:46:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:46:49 compute-0 podman[272278]: 2025-11-25 20:46:49.252383235 +0000 UTC m=+0.371960394 container init 777ae09b0032c258a061c273644719f8a6d9c1e26b63e094bbce983a7eda2125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:46:49 compute-0 podman[272278]: 2025-11-25 20:46:49.302516749 +0000 UTC m=+0.422093858 container start 777ae09b0032c258a061c273644719f8a6d9c1e26b63e094bbce983a7eda2125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:46:49 compute-0 wizardly_bohr[272295]: 167 167
Nov 25 20:46:49 compute-0 systemd[1]: libpod-777ae09b0032c258a061c273644719f8a6d9c1e26b63e094bbce983a7eda2125.scope: Deactivated successfully.
Nov 25 20:46:49 compute-0 podman[272278]: 2025-11-25 20:46:49.347079562 +0000 UTC m=+0.466656661 container attach 777ae09b0032c258a061c273644719f8a6d9c1e26b63e094bbce983a7eda2125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:46:49 compute-0 podman[272278]: 2025-11-25 20:46:49.348019619 +0000 UTC m=+0.467596738 container died 777ae09b0032c258a061c273644719f8a6d9c1e26b63e094bbce983a7eda2125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 25 20:46:49 compute-0 ceph-mon[75144]: pgmap v1255: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-53ebfc4ba2ed9f19773dcdeca3204b08e7487e779781fae24e25020ba83a86bf-merged.mount: Deactivated successfully.
Nov 25 20:46:49 compute-0 podman[272278]: 2025-11-25 20:46:49.541049162 +0000 UTC m=+0.660626271 container remove 777ae09b0032c258a061c273644719f8a6d9c1e26b63e094bbce983a7eda2125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:46:49 compute-0 systemd[1]: libpod-conmon-777ae09b0032c258a061c273644719f8a6d9c1e26b63e094bbce983a7eda2125.scope: Deactivated successfully.
Nov 25 20:46:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1256: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:49 compute-0 podman[272319]: 2025-11-25 20:46:49.767057103 +0000 UTC m=+0.039280620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:46:49 compute-0 podman[272319]: 2025-11-25 20:46:49.885920599 +0000 UTC m=+0.158144036 container create e73ec24b3aeab3ea26cc97c330af51a5c034641c0a33d4f4b2de077f0fb308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_zhukovsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:46:49 compute-0 systemd[1]: Started libpod-conmon-e73ec24b3aeab3ea26cc97c330af51a5c034641c0a33d4f4b2de077f0fb308eb.scope.
Nov 25 20:46:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff5a657c64209808cef0e32e0dde6f97807b9c145c6c8ce3f7c35b727b66c68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff5a657c64209808cef0e32e0dde6f97807b9c145c6c8ce3f7c35b727b66c68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff5a657c64209808cef0e32e0dde6f97807b9c145c6c8ce3f7c35b727b66c68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff5a657c64209808cef0e32e0dde6f97807b9c145c6c8ce3f7c35b727b66c68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:50 compute-0 podman[272319]: 2025-11-25 20:46:50.118068897 +0000 UTC m=+0.390292404 container init e73ec24b3aeab3ea26cc97c330af51a5c034641c0a33d4f4b2de077f0fb308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_zhukovsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:46:50 compute-0 podman[272319]: 2025-11-25 20:46:50.125262633 +0000 UTC m=+0.397486090 container start e73ec24b3aeab3ea26cc97c330af51a5c034641c0a33d4f4b2de077f0fb308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:46:50 compute-0 podman[272319]: 2025-11-25 20:46:50.561171398 +0000 UTC m=+0.833394855 container attach e73ec24b3aeab3ea26cc97c330af51a5c034641c0a33d4f4b2de077f0fb308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_zhukovsky, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:46:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]: {
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:     "0": [
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:         {
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "devices": [
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "/dev/loop3"
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             ],
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_name": "ceph_lv0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_size": "21470642176",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "name": "ceph_lv0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "tags": {
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.cluster_name": "ceph",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.crush_device_class": "",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.encrypted": "0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.osd_id": "0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.type": "block",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.vdo": "0"
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             },
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "type": "block",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "vg_name": "ceph_vg0"
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:         }
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:     ],
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:     "1": [
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:         {
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "devices": [
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "/dev/loop4"
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             ],
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_name": "ceph_lv1",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_size": "21470642176",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "name": "ceph_lv1",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "tags": {
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.cluster_name": "ceph",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.crush_device_class": "",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.encrypted": "0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.osd_id": "1",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.type": "block",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.vdo": "0"
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             },
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "type": "block",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "vg_name": "ceph_vg1"
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:         }
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:     ],
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:     "2": [
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:         {
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "devices": [
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "/dev/loop5"
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             ],
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_name": "ceph_lv2",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_size": "21470642176",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "name": "ceph_lv2",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "tags": {
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.cluster_name": "ceph",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.crush_device_class": "",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.encrypted": "0",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.osd_id": "2",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.type": "block",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:                 "ceph.vdo": "0"
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             },
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "type": "block",
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:             "vg_name": "ceph_vg2"
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:         }
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]:     ]
Nov 25 20:46:50 compute-0 lucid_zhukovsky[272336]: }
Nov 25 20:46:50 compute-0 systemd[1]: libpod-e73ec24b3aeab3ea26cc97c330af51a5c034641c0a33d4f4b2de077f0fb308eb.scope: Deactivated successfully.
Nov 25 20:46:50 compute-0 podman[272319]: 2025-11-25 20:46:50.860388391 +0000 UTC m=+1.132611878 container died e73ec24b3aeab3ea26cc97c330af51a5c034641c0a33d4f4b2de077f0fb308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:46:50 compute-0 ceph-mon[75144]: pgmap v1256: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:51 compute-0 nova_compute[248866]: 2025-11-25 20:46:51.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:51 compute-0 nova_compute[248866]: 2025-11-25 20:46:51.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ff5a657c64209808cef0e32e0dde6f97807b9c145c6c8ce3f7c35b727b66c68-merged.mount: Deactivated successfully.
Nov 25 20:46:51 compute-0 podman[272319]: 2025-11-25 20:46:51.34825358 +0000 UTC m=+1.620477037 container remove e73ec24b3aeab3ea26cc97c330af51a5c034641c0a33d4f4b2de077f0fb308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 20:46:51 compute-0 systemd[1]: libpod-conmon-e73ec24b3aeab3ea26cc97c330af51a5c034641c0a33d4f4b2de077f0fb308eb.scope: Deactivated successfully.
Nov 25 20:46:51 compute-0 sudo[272213]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:51 compute-0 sudo[272359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:46:51 compute-0 sudo[272359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:51 compute-0 sudo[272359]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:51 compute-0 sudo[272384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:46:51 compute-0 sudo[272384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:51 compute-0 sudo[272384]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:51 compute-0 sudo[272409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:46:51 compute-0 sudo[272409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:51 compute-0 sudo[272409]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1257: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:51 compute-0 sudo[272434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:46:51 compute-0 sudo[272434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:52 compute-0 podman[272499]: 2025-11-25 20:46:52.348572127 +0000 UTC m=+0.061553497 container create 2bb774af938b16c6518306f71e08275fb4187ffaf86842cb3d2c67f8203e4a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:46:52 compute-0 systemd[1]: Started libpod-conmon-2bb774af938b16c6518306f71e08275fb4187ffaf86842cb3d2c67f8203e4a87.scope.
Nov 25 20:46:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:46:52 compute-0 podman[272499]: 2025-11-25 20:46:52.325638222 +0000 UTC m=+0.038619582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:46:52 compute-0 podman[272499]: 2025-11-25 20:46:52.457970194 +0000 UTC m=+0.170951614 container init 2bb774af938b16c6518306f71e08275fb4187ffaf86842cb3d2c67f8203e4a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:46:52 compute-0 podman[272499]: 2025-11-25 20:46:52.471615276 +0000 UTC m=+0.184596636 container start 2bb774af938b16c6518306f71e08275fb4187ffaf86842cb3d2c67f8203e4a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:46:52 compute-0 happy_chaplygin[272515]: 167 167
Nov 25 20:46:52 compute-0 systemd[1]: libpod-2bb774af938b16c6518306f71e08275fb4187ffaf86842cb3d2c67f8203e4a87.scope: Deactivated successfully.
Nov 25 20:46:52 compute-0 podman[272499]: 2025-11-25 20:46:52.525261056 +0000 UTC m=+0.238242466 container attach 2bb774af938b16c6518306f71e08275fb4187ffaf86842cb3d2c67f8203e4a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:46:52 compute-0 podman[272499]: 2025-11-25 20:46:52.526268573 +0000 UTC m=+0.239249933 container died 2bb774af938b16c6518306f71e08275fb4187ffaf86842cb3d2c67f8203e4a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c548c9eb244e8379fa53ecdb5004b26ae89403f07c928529974b89e4dd11369-merged.mount: Deactivated successfully.
Nov 25 20:46:52 compute-0 podman[272499]: 2025-11-25 20:46:52.672266146 +0000 UTC m=+0.385247476 container remove 2bb774af938b16c6518306f71e08275fb4187ffaf86842cb3d2c67f8203e4a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:46:52 compute-0 systemd[1]: libpod-conmon-2bb774af938b16c6518306f71e08275fb4187ffaf86842cb3d2c67f8203e4a87.scope: Deactivated successfully.
Nov 25 20:46:52 compute-0 podman[272539]: 2025-11-25 20:46:52.948949767 +0000 UTC m=+0.090372090 container create 6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:46:52 compute-0 podman[272539]: 2025-11-25 20:46:52.89832917 +0000 UTC m=+0.039751543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
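Note the monotonic offsets in the two podman lines above: the image pull event (m=+0.039) was emitted before the container create event (m=+0.252) but reached the journal afterwards, so podman events here cannot be read strictly top to bottom. A minimal sketch of re-sorting by the m=+ offset (the helper name is illustrative, assuming lines formatted as shown):

```python
import re

# podman appends a monotonic offset like "m=+0.238242466" to each event;
# sorting on it restores emission order when the journal flushes lines late.
OFFSET = re.compile(r"m=\+(\d+\.\d+)")

def in_emission_order(lines):
    """Sort podman event lines by their monotonic m=+ offset."""
    return sorted(lines, key=lambda line: float(OFFSET.search(line).group(1)))
```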
Nov 25 20:46:53 compute-0 systemd[1]: Started libpod-conmon-6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804.scope.
Nov 25 20:46:53 compute-0 ceph-mon[75144]: pgmap v1257: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffd67e1388b4fac1a993f5ed418546748a7e738160ea2ded78d5214216ba13f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffd67e1388b4fac1a993f5ed418546748a7e738160ea2ded78d5214216ba13f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffd67e1388b4fac1a993f5ed418546748a7e738160ea2ded78d5214216ba13f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffd67e1388b4fac1a993f5ed418546748a7e738160ea2ded78d5214216ba13f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
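The kernel prints one of these notices per XFS bind mount whose on-disk inode timestamps are 32-bit (expected when the filesystem was created without the XFS bigtime feature); the quoted limit 0x7fffffff is simply the largest signed 32-bit epoch second. A quick check of the cutoff:

```python
from datetime import datetime, timezone

# 0x7fffffff is the limit the kernel reports for these XFS mounts.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```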
Nov 25 20:46:53 compute-0 podman[272539]: 2025-11-25 20:46:53.111099061 +0000 UTC m=+0.252521424 container init 6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_allen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:46:53 compute-0 podman[272539]: 2025-11-25 20:46:53.125458211 +0000 UTC m=+0.266880524 container start 6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_allen, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:46:53 compute-0 podman[272539]: 2025-11-25 20:46:53.135368972 +0000 UTC m=+0.276791285 container attach 6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:46:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1258: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.045 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.105 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.105 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.105 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
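The Acquiring/acquired/released triplet above is oslo.concurrency's standard logging around a named lock; nova serializes resource-tracker work on "compute_resources" this way. A minimal sketch of the pattern (illustrative function body, not nova's actual source):

```python
from oslo_concurrency import lockutils

# Callers serialize on the named lock; oslo emits the DEBUG
# "Acquiring/acquired/released" lines seen in the journal above.
@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    pass  # runs with the lock held; waited/held durations are logged
```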
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.105 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.106 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:46:54 compute-0 youthful_allen[272556]: {
Nov 25 20:46:54 compute-0 youthful_allen[272556]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "osd_id": 2,
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "type": "bluestore"
Nov 25 20:46:54 compute-0 youthful_allen[272556]:     },
Nov 25 20:46:54 compute-0 youthful_allen[272556]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "osd_id": 1,
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "type": "bluestore"
Nov 25 20:46:54 compute-0 youthful_allen[272556]:     },
Nov 25 20:46:54 compute-0 youthful_allen[272556]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "osd_id": 0,
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:46:54 compute-0 youthful_allen[272556]:         "type": "bluestore"
Nov 25 20:46:54 compute-0 youthful_allen[272556]:     }
Nov 25 20:46:54 compute-0 youthful_allen[272556]: }
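The short-lived youthful_allen container prints a JSON map keyed by OSD uuid, which cephadm uses to refresh this host's OSD inventory (three BlueStore OSDs on LVM here). The structure is trivially machine-readable; a sketch using a trimmed copy of the output above:

```python
import json

# Trimmed to one entry from the JSON block printed by the container above.
raw_output = """{
    "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
        "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
        "type": "bluestore"
    }
}"""

for meta in json.loads(raw_output).values():
    print(f"osd.{meta['osd_id']}: {meta['type']} on {meta['device']}")
```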
Nov 25 20:46:54 compute-0 systemd[1]: libpod-6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804.scope: Deactivated successfully.
Nov 25 20:46:54 compute-0 systemd[1]: libpod-6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804.scope: Consumed 1.140s CPU time.
Nov 25 20:46:54 compute-0 podman[272539]: 2025-11-25 20:46:54.254886412 +0000 UTC m=+1.396308735 container died 6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 20:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ffd67e1388b4fac1a993f5ed418546748a7e738160ea2ded78d5214216ba13f-merged.mount: Deactivated successfully.
Nov 25 20:46:54 compute-0 podman[272539]: 2025-11-25 20:46:54.546049607 +0000 UTC m=+1.687471930 container remove 6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 20:46:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:46:54 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1156922493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:46:54 compute-0 systemd[1]: libpod-conmon-6fde313a8c32c3fb81e398fba8f469dd256e83f94eaefb14fbdaa16614e67804.scope: Deactivated successfully.
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.575 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
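The Running cmd / CMD returned pair shows nova measuring RBD pool capacity by shelling out to ceph df as client.openstack (hence the matching mon audit entries). A sketch of the same call through oslo.concurrency (not nova's literal code):

```python
from oslo_concurrency import processutils

# Equivalent to the subprocess logged above; returns (stdout, stderr) and
# raises ProcessExecutionError on a non-zero exit code.
out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
```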
Nov 25 20:46:54 compute-0 sudo[272434]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:46:54 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:46:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:46:54 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:46:54 compute-0 sudo[272622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:46:54 compute-0 sudo[272622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:54 compute-0 sudo[272622]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.800 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.802 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5268MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.802 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:46:54 compute-0 nova_compute[248866]: 2025-11-25 20:46:54.803 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:46:54 compute-0 sudo[272647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:46:54 compute-0 sudo[272647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:46:54 compute-0 sudo[272647]: pam_unix(sudo:session): session closed for user root
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.010 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.011 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.089 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing inventories for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 20:46:55 compute-0 ceph-mon[75144]: pgmap v1258: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:55 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1156922493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:46:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:46:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.145 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating ProviderTree inventory for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.145 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating inventory in ProviderTree for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
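The inventory reported to placement converts to schedulable capacity as (total - reserved) × allocation_ratio, so the figures above come out to 32 VCPUs, 7168 MB of RAM, and 53.1 GB of disk. A quick check:

```python
inventory = {  # copied from the ProviderTree update above
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
# VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 53.1
```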
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.159 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing aggregate associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.180 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing trait associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, traits: HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.195 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:46:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:46:55 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1384540208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.636 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.643 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.667 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.669 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:46:55 compute-0 nova_compute[248866]: 2025-11-25 20:46:55.669 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:46:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:46:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1259: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:46:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
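These periodic RocksDB dumps are internally consistent: the writes-per-sync figure is just cumulative WAL writes divided by syncs, and the all-zero Interval rows mean no keys were ingested during the last 600 s window. Checking the numbers above:

```python
writes, syncs = 4230, 387  # from the Cumulative WAL line above
print(round(writes / syncs, 2))  # 10.93
```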
Nov 25 20:46:56 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1384540208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:46:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:46:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:46:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:46:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:46:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:46:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:46:57
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'images', 'backups', 'vms', 'cephfs.cephfs.meta']
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:46:57 compute-0 ceph-mon[75144]: pgmap v1259: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:57 compute-0 nova_compute[248866]: 2025-11-25 20:46:57.667 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:57 compute-0 nova_compute[248866]: 2025-11-25 20:46:57.668 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:46:57 compute-0 nova_compute[248866]: 2025-11-25 20:46:57.668 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:46:57 compute-0 nova_compute[248866]: 2025-11-25 20:46:57.697 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:46:57 compute-0 nova_compute[248866]: 2025-11-25 20:46:57.698 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:46:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1260: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:59 compute-0 ceph-mon[75144]: pgmap v1260: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:46:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1261: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:01 compute-0 ceph-mon[75144]: pgmap v1261: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:47:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:47:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1262: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:47:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
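The autoscaler's pg target for a pool is its usage fraction times the cluster-wide PG budget, then quantized to a power of two with a floor of 1. The '.mgr' line above is consistent with the default budget of mon_target_pg_per_osd=100 across this host's 3 OSDs; neither value is logged here, so both are assumptions:

```python
usage_fraction = 1.4371499967441557e-05  # '.mgr' pool line above
target_pg_per_osd, num_osds = 100, 3     # assumed: Ceph default and 3 OSDs

print(usage_fraction * target_pg_per_osd * num_osds)
# ~0.004311449990232467, matching the logged pg target; quantized up to 1
```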
Nov 25 20:47:03 compute-0 ceph-mon[75144]: pgmap v1262: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1263: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:04 compute-0 ceph-mgr[75443]: [devicehealth INFO root] Check health
Nov 25 20:47:05 compute-0 ceph-mon[75144]: pgmap v1263: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1264: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:05 compute-0 podman[272694]: 2025-11-25 20:47:05.99455304 +0000 UTC m=+0.083016401 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
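These health_status lines embed each container's config_data as a Python-literal dict (single quotes, bare True), so ast.literal_eval rather than json is the right parser for pulling fields out of them. A sketch on a trimmed literal from the line above:

```python
import ast

# Trimmed from the config_data=... field of the ovn_metadata_agent line.
config_data = ast.literal_eval(
    "{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent',"
    " 'test': '/openstack/healthcheck'}, 'restart': 'always', 'privileged': True}"
)
print(config_data['healthcheck']['test'])  # /openstack/healthcheck
```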
Nov 25 20:47:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:47:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:47:07 compute-0 ceph-mon[75144]: pgmap v1264: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1265: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:07 compute-0 podman[272713]: 2025-11-25 20:47:07.995039849 +0000 UTC m=+0.097976538 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 25 20:47:09 compute-0 ceph-mon[75144]: pgmap v1265: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1266: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:11 compute-0 ceph-mon[75144]: pgmap v1266: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1267: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:13 compute-0 ceph-mon[75144]: pgmap v1267: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1268: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:15 compute-0 podman[272733]: 2025-11-25 20:47:15.024279301 +0000 UTC m=+0.118635650 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:47:15 compute-0 ceph-mon[75144]: pgmap v1268: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1269: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:17 compute-0 ceph-mon[75144]: pgmap v1269: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1270: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:19 compute-0 ceph-mon[75144]: pgmap v1270: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1271: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:21 compute-0 ceph-mon[75144]: pgmap v1271: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1272: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:23 compute-0 ceph-mon[75144]: pgmap v1272: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1273: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:25 compute-0 ceph-mon[75144]: pgmap v1273: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1274: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:47:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:47:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:47:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:47:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:47:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:47:27 compute-0 ceph-mon[75144]: pgmap v1274: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1275: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:29 compute-0 ceph-mon[75144]: pgmap v1275: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1276: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:31 compute-0 ceph-mon[75144]: pgmap v1276: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1277: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:33 compute-0 ceph-mon[75144]: pgmap v1277: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1278: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:35 compute-0 ceph-mon[75144]: pgmap v1278: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1279: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:37 compute-0 podman[272759]: 2025-11-25 20:47:37.026116114 +0000 UTC m=+0.111833405 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 20:47:37 compute-0 ceph-mon[75144]: pgmap v1279: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1280: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:38 compute-0 podman[272779]: 2025-11-25 20:47:38.984210038 +0000 UTC m=+0.072992817 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 20:47:39 compute-0 ceph-mon[75144]: pgmap v1280: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1281: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:41 compute-0 ceph-mon[75144]: pgmap v1281: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1282: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:43 compute-0 ceph-mon[75144]: pgmap v1282: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1283: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:45 compute-0 ceph-mon[75144]: pgmap v1283: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1284: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:46 compute-0 podman[272800]: 2025-11-25 20:47:46.027591473 +0000 UTC m=+0.121928059 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 20:47:46 compute-0 nova_compute[248866]: 2025-11-25 20:47:46.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:47:46 compute-0 nova_compute[248866]: 2025-11-25 20:47:46.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:47:47 compute-0 ceph-mon[75144]: pgmap v1284: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1285: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:47:48.964 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:47:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:47:48.965 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:47:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:47:48.966 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:47:49 compute-0 nova_compute[248866]: 2025-11-25 20:47:49.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:47:49 compute-0 ceph-mon[75144]: pgmap v1285: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1286: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:51 compute-0 nova_compute[248866]: 2025-11-25 20:47:51.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:47:51 compute-0 ceph-mon[75144]: pgmap v1286: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1287: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:52 compute-0 nova_compute[248866]: 2025-11-25 20:47:52.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:47:52 compute-0 nova_compute[248866]: 2025-11-25 20:47:52.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:47:53 compute-0 ceph-mon[75144]: pgmap v1287: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1288: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.081 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.082 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.082 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.082 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.082 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:47:54 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:47:54 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144481293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.502 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:47:54 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4144481293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.745 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.748 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5305MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.748 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.749 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.835 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.836 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:47:54 compute-0 nova_compute[248866]: 2025-11-25 20:47:54.864 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:47:54 compute-0 sudo[272848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:47:54 compute-0 sudo[272848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:54 compute-0 sudo[272848]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:55 compute-0 sudo[272874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:47:55 compute-0 sudo[272874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:55 compute-0 sudo[272874]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:55 compute-0 sudo[272918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:47:55 compute-0 sudo[272918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:55 compute-0 sudo[272918]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:55 compute-0 sudo[272943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 20:47:55 compute-0 sudo[272943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:47:55 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166230692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:47:55 compute-0 nova_compute[248866]: 2025-11-25 20:47:55.337 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:47:55 compute-0 nova_compute[248866]: 2025-11-25 20:47:55.346 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:47:55 compute-0 nova_compute[248866]: 2025-11-25 20:47:55.365 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:47:55 compute-0 nova_compute[248866]: 2025-11-25 20:47:55.368 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:47:55 compute-0 nova_compute[248866]: 2025-11-25 20:47:55.368 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:47:55 compute-0 sudo[272943]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:47:55 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:47:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:47:55 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:47:55 compute-0 sudo[272990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:47:55 compute-0 sudo[272990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:55 compute-0 sudo[272990]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:55 compute-0 ceph-mon[75144]: pgmap v1288: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:55 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4166230692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:47:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:47:55 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:47:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:47:55 compute-0 sudo[273015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:47:55 compute-0 sudo[273015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:55 compute-0 sudo[273015]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:55 compute-0 sudo[273040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:47:55 compute-0 sudo[273040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:55 compute-0 sudo[273040]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1289: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:55 compute-0 sudo[273065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:47:55 compute-0 sudo[273065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:56 compute-0 sudo[273065]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:47:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:47:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:47:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:47:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:47:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:47:56 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 3d1cda6a-c9ef-4907-be36-51ba82265547 does not exist
Nov 25 20:47:56 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 569915e7-840f-4b2d-ab81-7d863b4c6bfa does not exist
Nov 25 20:47:56 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 5331bb1b-027a-4ff6-8636-cfb74019f703 does not exist
Nov 25 20:47:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:47:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:47:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:47:56 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:47:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:47:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:47:56 compute-0 sudo[273121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:47:56 compute-0 sudo[273121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:56 compute-0 sudo[273121]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:47:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:47:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:47:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:47:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:47:56 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:47:56 compute-0 sudo[273146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:47:56 compute-0 sudo[273146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:56 compute-0 sudo[273146]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:56 compute-0 sudo[273171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:47:56 compute-0 sudo[273171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:56 compute-0 sudo[273171]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:47:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:47:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:47:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:47:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:47:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:47:56 compute-0 sudo[273196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:47:56 compute-0 sudo[273196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:47:57
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', 'backups']
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:47:57 compute-0 nova_compute[248866]: 2025-11-25 20:47:57.368 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:47:57 compute-0 podman[273262]: 2025-11-25 20:47:57.417251756 +0000 UTC m=+0.064944899 container create 6c0a69034ce042c14398603184e4603466f2a584a51e22c195b126dcd28e6186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_raman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:47:57 compute-0 systemd[1]: Started libpod-conmon-6c0a69034ce042c14398603184e4603466f2a584a51e22c195b126dcd28e6186.scope.
Nov 25 20:47:57 compute-0 podman[273262]: 2025-11-25 20:47:57.388768101 +0000 UTC m=+0.036461284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:47:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:47:57 compute-0 podman[273262]: 2025-11-25 20:47:57.526406637 +0000 UTC m=+0.174099830 container init 6c0a69034ce042c14398603184e4603466f2a584a51e22c195b126dcd28e6186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:47:57 compute-0 podman[273262]: 2025-11-25 20:47:57.538742412 +0000 UTC m=+0.186435515 container start 6c0a69034ce042c14398603184e4603466f2a584a51e22c195b126dcd28e6186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:47:57 compute-0 podman[273262]: 2025-11-25 20:47:57.542373581 +0000 UTC m=+0.190066774 container attach 6c0a69034ce042c14398603184e4603466f2a584a51e22c195b126dcd28e6186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_raman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:47:57 compute-0 friendly_raman[273279]: 167 167
Nov 25 20:47:57 compute-0 systemd[1]: libpod-6c0a69034ce042c14398603184e4603466f2a584a51e22c195b126dcd28e6186.scope: Deactivated successfully.
Nov 25 20:47:57 compute-0 podman[273262]: 2025-11-25 20:47:57.54818819 +0000 UTC m=+0.195881363 container died 6c0a69034ce042c14398603184e4603466f2a584a51e22c195b126dcd28e6186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9dcf809a33a3f9d49f3e229b4f764d5953ebb1ad146891e6403e2aebecde5e9-merged.mount: Deactivated successfully.
Nov 25 20:47:57 compute-0 podman[273262]: 2025-11-25 20:47:57.60696838 +0000 UTC m=+0.254661513 container remove 6c0a69034ce042c14398603184e4603466f2a584a51e22c195b126dcd28e6186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_raman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:47:57 compute-0 systemd[1]: libpod-conmon-6c0a69034ce042c14398603184e4603466f2a584a51e22c195b126dcd28e6186.scope: Deactivated successfully.
Nov 25 20:47:57 compute-0 ceph-mon[75144]: pgmap v1289: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1290: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:57 compute-0 podman[273305]: 2025-11-25 20:47:57.857687513 +0000 UTC m=+0.077049557 container create db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:47:57 compute-0 systemd[1]: Started libpod-conmon-db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d.scope.
Nov 25 20:47:57 compute-0 podman[273305]: 2025-11-25 20:47:57.827740839 +0000 UTC m=+0.047102993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:47:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b604d205ab21125c9ee36ec9ce056457eadaffe7ebd77b3b341474be0d1928da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b604d205ab21125c9ee36ec9ce056457eadaffe7ebd77b3b341474be0d1928da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b604d205ab21125c9ee36ec9ce056457eadaffe7ebd77b3b341474be0d1928da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b604d205ab21125c9ee36ec9ce056457eadaffe7ebd77b3b341474be0d1928da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b604d205ab21125c9ee36ec9ce056457eadaffe7ebd77b3b341474be0d1928da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:47:57 compute-0 podman[273305]: 2025-11-25 20:47:57.976485247 +0000 UTC m=+0.195847361 container init db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:47:57 compute-0 podman[273305]: 2025-11-25 20:47:57.987138326 +0000 UTC m=+0.206500370 container start db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:47:58 compute-0 podman[273305]: 2025-11-25 20:47:58.001039125 +0000 UTC m=+0.220401199 container attach db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:47:58 compute-0 nova_compute[248866]: 2025-11-25 20:47:58.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:47:58 compute-0 nova_compute[248866]: 2025-11-25 20:47:58.045 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:47:58 compute-0 nova_compute[248866]: 2025-11-25 20:47:58.046 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:47:58 compute-0 nova_compute[248866]: 2025-11-25 20:47:58.065 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:47:59 compute-0 naughty_noether[273322]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:47:59 compute-0 naughty_noether[273322]: --> relative data size: 1.0
Nov 25 20:47:59 compute-0 naughty_noether[273322]: --> All data devices are unavailable
Nov 25 20:47:59 compute-0 systemd[1]: libpod-db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d.scope: Deactivated successfully.
Nov 25 20:47:59 compute-0 systemd[1]: libpod-db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d.scope: Consumed 1.185s CPU time.
Nov 25 20:47:59 compute-0 podman[273305]: 2025-11-25 20:47:59.211385208 +0000 UTC m=+1.430747282 container died db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:47:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b604d205ab21125c9ee36ec9ce056457eadaffe7ebd77b3b341474be0d1928da-merged.mount: Deactivated successfully.
Nov 25 20:47:59 compute-0 podman[273305]: 2025-11-25 20:47:59.283239914 +0000 UTC m=+1.502601968 container remove db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:47:59 compute-0 systemd[1]: libpod-conmon-db80500b553d01f3ba7715b4484b4688ef0fd902ee2fdcad5c03316f9959db0d.scope: Deactivated successfully.
Nov 25 20:47:59 compute-0 sudo[273196]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:59 compute-0 sudo[273364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:47:59 compute-0 sudo[273364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:59 compute-0 sudo[273364]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:59 compute-0 sudo[273389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:47:59 compute-0 sudo[273389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:59 compute-0 sudo[273389]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:59 compute-0 sudo[273414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:47:59 compute-0 sudo[273414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:59 compute-0 sudo[273414]: pam_unix(sudo:session): session closed for user root
Nov 25 20:47:59 compute-0 ceph-mon[75144]: pgmap v1290: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:47:59 compute-0 sudo[273439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:47:59 compute-0 sudo[273439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:47:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1291: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:00 compute-0 podman[273505]: 2025-11-25 20:48:00.195935125 +0000 UTC m=+0.059053478 container create 32960033ced1ad48a16e830e438f3a72b6bc33ae055e87fff53efcfb92afe8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:48:00 compute-0 systemd[1]: Started libpod-conmon-32960033ced1ad48a16e830e438f3a72b6bc33ae055e87fff53efcfb92afe8db.scope.
Nov 25 20:48:00 compute-0 podman[273505]: 2025-11-25 20:48:00.170096292 +0000 UTC m=+0.033214655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:48:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:48:00 compute-0 podman[273505]: 2025-11-25 20:48:00.30265332 +0000 UTC m=+0.165771663 container init 32960033ced1ad48a16e830e438f3a72b6bc33ae055e87fff53efcfb92afe8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mahavira, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:48:00 compute-0 podman[273505]: 2025-11-25 20:48:00.314512593 +0000 UTC m=+0.177630936 container start 32960033ced1ad48a16e830e438f3a72b6bc33ae055e87fff53efcfb92afe8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:48:00 compute-0 podman[273505]: 2025-11-25 20:48:00.318603504 +0000 UTC m=+0.181721897 container attach 32960033ced1ad48a16e830e438f3a72b6bc33ae055e87fff53efcfb92afe8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:48:00 compute-0 eager_mahavira[273521]: 167 167
Nov 25 20:48:00 compute-0 systemd[1]: libpod-32960033ced1ad48a16e830e438f3a72b6bc33ae055e87fff53efcfb92afe8db.scope: Deactivated successfully.
Nov 25 20:48:00 compute-0 podman[273505]: 2025-11-25 20:48:00.322098059 +0000 UTC m=+0.185216412 container died 32960033ced1ad48a16e830e438f3a72b6bc33ae055e87fff53efcfb92afe8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bb073f83171311f8ea996e98a008ea8cbfe8f8a14f7572997ded712c17c5cdc-merged.mount: Deactivated successfully.
Nov 25 20:48:00 compute-0 podman[273505]: 2025-11-25 20:48:00.372016738 +0000 UTC m=+0.235135091 container remove 32960033ced1ad48a16e830e438f3a72b6bc33ae055e87fff53efcfb92afe8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:48:00 compute-0 systemd[1]: libpod-conmon-32960033ced1ad48a16e830e438f3a72b6bc33ae055e87fff53efcfb92afe8db.scope: Deactivated successfully.
Nov 25 20:48:00 compute-0 podman[273544]: 2025-11-25 20:48:00.633139755 +0000 UTC m=+0.072306909 container create e7ac91f298240a5996950fc9043238cc559d92970c5f96ad203c9e9e1421fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:48:00 compute-0 systemd[1]: Started libpod-conmon-e7ac91f298240a5996950fc9043238cc559d92970c5f96ad203c9e9e1421fc4c.scope.
Nov 25 20:48:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:00 compute-0 podman[273544]: 2025-11-25 20:48:00.605513163 +0000 UTC m=+0.044680367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:48:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52666237ab2594673cadbb2b7cc3dea5a82a12e5dc5a0f1633c4f8bfcf6f5c78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52666237ab2594673cadbb2b7cc3dea5a82a12e5dc5a0f1633c4f8bfcf6f5c78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52666237ab2594673cadbb2b7cc3dea5a82a12e5dc5a0f1633c4f8bfcf6f5c78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52666237ab2594673cadbb2b7cc3dea5a82a12e5dc5a0f1633c4f8bfcf6f5c78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:48:00 compute-0 podman[273544]: 2025-11-25 20:48:00.761385116 +0000 UTC m=+0.200552330 container init e7ac91f298240a5996950fc9043238cc559d92970c5f96ad203c9e9e1421fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:48:00 compute-0 podman[273544]: 2025-11-25 20:48:00.775409638 +0000 UTC m=+0.214576752 container start e7ac91f298240a5996950fc9043238cc559d92970c5f96ad203c9e9e1421fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:48:00 compute-0 podman[273544]: 2025-11-25 20:48:00.780867416 +0000 UTC m=+0.220034570 container attach e7ac91f298240a5996950fc9043238cc559d92970c5f96ad203c9e9e1421fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]: {
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:     "0": [
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:         {
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "devices": [
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "/dev/loop3"
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             ],
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_name": "ceph_lv0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_size": "21470642176",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "name": "ceph_lv0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "tags": {
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.cluster_name": "ceph",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.crush_device_class": "",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.encrypted": "0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.osd_id": "0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.type": "block",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.vdo": "0"
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             },
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "type": "block",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "vg_name": "ceph_vg0"
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:         }
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:     ],
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:     "1": [
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:         {
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "devices": [
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "/dev/loop4"
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             ],
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_name": "ceph_lv1",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_size": "21470642176",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "name": "ceph_lv1",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "tags": {
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.cluster_name": "ceph",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.crush_device_class": "",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.encrypted": "0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.osd_id": "1",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.type": "block",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.vdo": "0"
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             },
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "type": "block",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "vg_name": "ceph_vg1"
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:         }
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:     ],
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:     "2": [
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:         {
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "devices": [
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "/dev/loop5"
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             ],
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_name": "ceph_lv2",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_size": "21470642176",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "name": "ceph_lv2",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "tags": {
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.cluster_name": "ceph",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.crush_device_class": "",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.encrypted": "0",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.osd_id": "2",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.type": "block",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:                 "ceph.vdo": "0"
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             },
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "type": "block",
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:             "vg_name": "ceph_vg2"
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:         }
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]:     ]
Nov 25 20:48:01 compute-0 eloquent_robinson[273560]: }
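The JSON block above is `ceph-volume lvm list --format json` output from the short-lived eloquent_robinson container: top-level keys are OSD ids, each holding the list of LVs backing that OSD. A minimal sketch of consuming it, assuming the block has been saved to a hypothetical lvm_list.json:

```python
import json

# Hypothetical filename: the JSON printed by the container above, saved locally.
with open("lvm_list.json") as f:
    osds = json.load(f)

# Top-level keys are OSD ids; each value is a list of LVs backing that OSD.
for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"({size_gib:.1f} GiB on {','.join(lv['devices'])})")
# -> osd.0: /dev/ceph_vg0/ceph_lv0 (20.0 GiB on /dev/loop3)  (and osd.1, osd.2)
```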
Nov 25 20:48:01 compute-0 systemd[1]: libpod-e7ac91f298240a5996950fc9043238cc559d92970c5f96ad203c9e9e1421fc4c.scope: Deactivated successfully.
Nov 25 20:48:01 compute-0 podman[273544]: 2025-11-25 20:48:01.575706749 +0000 UTC m=+1.014873893 container died e7ac91f298240a5996950fc9043238cc559d92970c5f96ad203c9e9e1421fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-52666237ab2594673cadbb2b7cc3dea5a82a12e5dc5a0f1633c4f8bfcf6f5c78-merged.mount: Deactivated successfully.
Nov 25 20:48:01 compute-0 podman[273544]: 2025-11-25 20:48:01.647910525 +0000 UTC m=+1.087077639 container remove e7ac91f298240a5996950fc9043238cc559d92970c5f96ad203c9e9e1421fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:48:01 compute-0 systemd[1]: libpod-conmon-e7ac91f298240a5996950fc9043238cc559d92970c5f96ad203c9e9e1421fc4c.scope: Deactivated successfully.
Nov 25 20:48:01 compute-0 sudo[273439]: pam_unix(sudo:session): session closed for user root
Nov 25 20:48:01 compute-0 ceph-mon[75144]: pgmap v1291: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:01 compute-0 sudo[273583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:48:01 compute-0 sudo[273583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:48:01 compute-0 sudo[273583]: pam_unix(sudo:session): session closed for user root
Nov 25 20:48:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1292: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:01 compute-0 sudo[273608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:48:01 compute-0 sudo[273608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:48:01 compute-0 sudo[273608]: pam_unix(sudo:session): session closed for user root
Nov 25 20:48:01 compute-0 sudo[273633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:48:01 compute-0 sudo[273633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:48:01 compute-0 sudo[273633]: pam_unix(sudo:session): session closed for user root
Nov 25 20:48:02 compute-0 sudo[273658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:48:02 compute-0 sudo[273658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
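The sudo line above shows how mgr/cephadm gathers this inventory: it keeps a copy of the cephadm binary under /var/lib/ceph/&lt;fsid&gt;/ and runs ceph-volume through it in a throwaway container. A sketch of the same invocation from Python, with every argument copied verbatim from the logged command; this only mirrors what the orchestrator drives over its SSH session as ceph-admin:

```python
import json
import subprocess

# All paths, the image digest, and the fsid are taken from the sudo line above.
FSID = "712dd110-763a-5547-8ef7-acda1414fdce"
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

cmd = ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
       "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
print(json.dumps(json.loads(out), indent=4))
```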
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:48:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
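Each pg_autoscaler line above is simple arithmetic: pg target = (pool's share of raw capacity) × an effective PG budget, then snapped to a power of two. For '.mgr' the numbers work out exactly if the budget is 300, which would match mon_target_pg_per_osd=100 on 3 OSDs; that factor is an assumption, the rest comes straight from the log:

```python
import math

usage_ratio = 1.4371499967441557e-05   # '.mgr' share of raw space, from the log
pg_budget = 100 * 3                    # assumed: mon_target_pg_per_osd x 3 OSDs
raw_target = usage_ratio * pg_budget
print(raw_target)                      # 0.004311449990232467 -- matches the log
# Snap to a power of two, never below 1 PG:
quantized = 2 ** round(math.log2(raw_target)) if raw_target >= 1 else 1
print(quantized)                       # 1, as in "quantized to 1 (current 1)"
```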
Nov 25 20:48:02 compute-0 podman[273723]: 2025-11-25 20:48:02.579377118 +0000 UTC m=+0.088243204 container create 0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_einstein, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:48:02 compute-0 podman[273723]: 2025-11-25 20:48:02.519818516 +0000 UTC m=+0.028684702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:48:02 compute-0 systemd[1]: Started libpod-conmon-0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90.scope.
Nov 25 20:48:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:48:02 compute-0 podman[273723]: 2025-11-25 20:48:02.703368532 +0000 UTC m=+0.212234658 container init 0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_einstein, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:48:02 compute-0 podman[273723]: 2025-11-25 20:48:02.714592178 +0000 UTC m=+0.223458264 container start 0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_einstein, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:48:02 compute-0 systemd[1]: libpod-0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90.scope: Deactivated successfully.
Nov 25 20:48:02 compute-0 hopeful_einstein[273740]: 167 167
Nov 25 20:48:02 compute-0 conmon[273740]: conmon 0f341b49aeba674aff77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90.scope/container/memory.events
Nov 25 20:48:02 compute-0 podman[273723]: 2025-11-25 20:48:02.760596669 +0000 UTC m=+0.269462835 container attach 0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_einstein, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:48:02 compute-0 podman[273723]: 2025-11-25 20:48:02.76135126 +0000 UTC m=+0.270217386 container died 0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fee425eb86e9db2699fe886b5d2c1c6353f3f8cb62e56fd85d062ea44519e1c-merged.mount: Deactivated successfully.
Nov 25 20:48:03 compute-0 nova_compute[248866]: 2025-11-25 20:48:03.060 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:03 compute-0 podman[273723]: 2025-11-25 20:48:03.160439802 +0000 UTC m=+0.669305938 container remove 0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:48:03 compute-0 systemd[1]: libpod-conmon-0f341b49aeba674aff77b3a09463f0a4c554bc7609217bd8719ce2f3fe604f90.scope: Deactivated successfully.
Nov 25 20:48:03 compute-0 podman[273764]: 2025-11-25 20:48:03.423310817 +0000 UTC m=+0.074485548 container create 8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:48:03 compute-0 systemd[1]: Started libpod-conmon-8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525.scope.
Nov 25 20:48:03 compute-0 podman[273764]: 2025-11-25 20:48:03.393015923 +0000 UTC m=+0.044190704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:48:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c74ccb5f93e27f55fe7a9520a15dc38b0552e4b26b218be95a308101454ca8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c74ccb5f93e27f55fe7a9520a15dc38b0552e4b26b218be95a308101454ca8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c74ccb5f93e27f55fe7a9520a15dc38b0552e4b26b218be95a308101454ca8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c74ccb5f93e27f55fe7a9520a15dc38b0552e4b26b218be95a308101454ca8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:48:03 compute-0 podman[273764]: 2025-11-25 20:48:03.534472233 +0000 UTC m=+0.185647004 container init 8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 20:48:03 compute-0 podman[273764]: 2025-11-25 20:48:03.545055041 +0000 UTC m=+0.196229772 container start 8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 20:48:03 compute-0 podman[273764]: 2025-11-25 20:48:03.549944914 +0000 UTC m=+0.201119705 container attach 8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 20:48:03 compute-0 ceph-mon[75144]: pgmap v1292: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1293: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:04 compute-0 nifty_shannon[273780]: {
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "osd_id": 2,
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "type": "bluestore"
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:     },
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "osd_id": 1,
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "type": "bluestore"
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:     },
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "osd_id": 0,
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:         "type": "bluestore"
Nov 25 20:48:04 compute-0 nifty_shannon[273780]:     }
Nov 25 20:48:04 compute-0 nifty_shannon[273780]: }
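This second JSON document is `ceph-volume raw list` output from the nifty_shannon container, keyed by OSD fsid rather than OSD id. Joined with the earlier lvm list output it yields a full osd.N -> dm device -> LV -> loop device mapping; a small sketch, again with hypothetical filenames for the two blobs printed above:

```python
import json

raw = json.load(open("raw_list.json"))   # keyed by osd_uuid (this block)
lvm = json.load(open("lvm_list.json"))   # keyed by osd_id (earlier block)

for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
    osd_id = str(info["osd_id"])
    lv_path = lvm[osd_id][0]["lv_path"]
    print(f"osd.{osd_id} ({info['type']}): {info['device']} <- {lv_path}")
# -> osd.0 (bluestore): /dev/mapper/ceph_vg0-ceph_lv0 <- /dev/ceph_vg0/ceph_lv0
```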
Nov 25 20:48:04 compute-0 systemd[1]: libpod-8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525.scope: Deactivated successfully.
Nov 25 20:48:04 compute-0 systemd[1]: libpod-8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525.scope: Consumed 1.216s CPU time.
Nov 25 20:48:04 compute-0 conmon[273780]: conmon 8817daefc3b9a01ef913 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525.scope/container/memory.events
Nov 25 20:48:04 compute-0 podman[273764]: 2025-11-25 20:48:04.756025121 +0000 UTC m=+1.407199882 container died 8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:48:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-34c74ccb5f93e27f55fe7a9520a15dc38b0552e4b26b218be95a308101454ca8-merged.mount: Deactivated successfully.
Nov 25 20:48:04 compute-0 podman[273764]: 2025-11-25 20:48:04.831225598 +0000 UTC m=+1.482400329 container remove 8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:48:04 compute-0 systemd[1]: libpod-conmon-8817daefc3b9a01ef913b421c2b72795b8fdc551fd9c96e0654845b2c0320525.scope: Deactivated successfully.
Nov 25 20:48:04 compute-0 sudo[273658]: pam_unix(sudo:session): session closed for user root
Nov 25 20:48:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:48:04 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:48:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:48:04 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:48:04 compute-0 sudo[273828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:48:04 compute-0 sudo[273828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:48:04 compute-0 sudo[273828]: pam_unix(sudo:session): session closed for user root
Nov 25 20:48:05 compute-0 sudo[273853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:48:05 compute-0 sudo[273853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:48:05 compute-0 sudo[273853]: pam_unix(sudo:session): session closed for user root
Nov 25 20:48:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:05 compute-0 ceph-mon[75144]: pgmap v1293: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:48:05 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:48:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1294: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:07 compute-0 ceph-mon[75144]: pgmap v1294: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1295: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:08 compute-0 podman[273878]: 2025-11-25 20:48:08.004442086 +0000 UTC m=+0.097881216 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:48:09 compute-0 ceph-mon[75144]: pgmap v1295: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1296: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:10 compute-0 podman[273897]: 2025-11-25 20:48:10.032172216 +0000 UTC m=+0.090478593 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
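The two podman lines above are periodic healthcheck results for ovn_metadata_agent and multipathd (health_status=healthy, failing streak 0). The same events can be followed live; a sketch assuming a podman build recent enough to emit health_status events and accept JSON-formatted output:

```python
import json
import subprocess

# Follow healthcheck events as they happen (assumption: this podman version
# supports event=health_status filtering and --format json).
proc = subprocess.Popen(
    ["podman", "events", "--filter", "event=health_status", "--format", "json"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    # Field names vary across podman versions; fall back to the raw status.
    print(ev.get("Name"), ev.get("HealthStatus") or ev.get("Status"))
```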
Nov 25 20:48:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:11 compute-0 ceph-mon[75144]: pgmap v1296: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1297: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:12 compute-0 ceph-mon[75144]: pgmap v1297: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1298: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:14 compute-0 ceph-mon[75144]: pgmap v1298: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.700621) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103695700704, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1922, "num_deletes": 250, "total_data_size": 2146928, "memory_usage": 2190136, "flush_reason": "Manual Compaction"}
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103695711380, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 1246891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24691, "largest_seqno": 26612, "table_properties": {"data_size": 1240670, "index_size": 3168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16019, "raw_average_key_size": 20, "raw_value_size": 1226889, "raw_average_value_size": 1583, "num_data_blocks": 146, "num_entries": 775, "num_filter_entries": 775, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764103479, "oldest_key_time": 1764103479, "file_creation_time": 1764103695, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 10796 microseconds, and 5221 cpu microseconds.
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.711431) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 1246891 bytes OK
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.711459) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.713388) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.713402) EVENT_LOG_v1 {"time_micros": 1764103695713398, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.713429) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2138840, prev total WAL file size 2138840, number of live WAL files 2.
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.714231) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303031' seq:72057594037927935, type:22 .. '6D6772737461740031323532' seq:0, type:0; will stop at (end)
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(1217KB)], [59(5335KB)]
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103695714291, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 6710931, "oldest_snapshot_seqno": -1}
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 4420 keys, 5473922 bytes, temperature: kUnknown
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103695756921, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 5473922, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5445686, "index_size": 16095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106718, "raw_average_key_size": 24, "raw_value_size": 5367641, "raw_average_value_size": 1214, "num_data_blocks": 689, "num_entries": 4420, "num_filter_entries": 4420, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764103695, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.757250) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 5473922 bytes
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.759034) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.1 rd, 128.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 5.2 +0.0 blob) out(5.2 +0.0 blob), read-write-amplify(9.8) write-amplify(4.4) OK, records in: 4830, records dropped: 410 output_compression: NoCompression
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.759063) EVENT_LOG_v1 {"time_micros": 1764103695759048, "job": 32, "event": "compaction_finished", "compaction_time_micros": 42731, "compaction_time_cpu_micros": 25945, "output_level": 6, "num_output_files": 1, "total_output_size": 5473922, "num_input_records": 4830, "num_output_records": 4420, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103695759563, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103695761599, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.714106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.761645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.761651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.761654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.761657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:15 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:15.761660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
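JOB 32's summary line above carries its own arithmetic: the amplification factors are byte ratios over the L0 input, and every EVENT_LOG_v1 payload is plain JSON. Checking the logged figures against the byte counts in the flush and compaction events:

```python
# Byte counts from JOB 31/32 above: flushed L0 table #61, input_data_size for
# the compaction, and the new L6 table #62.
l0_in = 1246891            # table #61 (the L0 input)
l6_in = 6710931 - l0_in    # table #59 (input_data_size minus the L0 file)
out   = 5473922            # table #62 (compaction output)

write_amp = out / l0_in                    # ~4.39 -> logged "write-amplify(4.4)"
rw_amp = (l0_in + l6_in + out) / l0_in     # ~9.77 -> logged "read-write-amplify(9.8)"
print(f"write-amplify={write_amp:.1f} read-write-amplify={rw_amp:.1f}")
```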
Nov 25 20:48:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1299: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:48:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1108072698' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:48:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:48:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1108072698' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:48:17 compute-0 podman[273918]: 2025-11-25 20:48:17.067505681 +0000 UTC m=+0.161903107 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 20:48:17 compute-0 ceph-mon[75144]: pgmap v1299: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1108072698' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:48:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/1108072698' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
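The audit entries above show client.openstack (the RBD driver on 192.168.122.10) polling pool capacity and quota with two mon commands. The same payloads can be issued with the rados Python binding; the conffile path and client name are taken from the log, but treating them as directly usable credentials is an assumption for illustration:

```python
import json
import rados  # python3-rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
for payload in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
    ret, out, errs = cluster.mon_command(json.dumps(payload), b"")
    print(payload["prefix"], "->", out.decode() if out else errs)
cluster.shutdown()
```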
Nov 25 20:48:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1300: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:19 compute-0 ceph-mon[75144]: pgmap v1300: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1301: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:21 compute-0 ceph-mon[75144]: pgmap v1301: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1302: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:23 compute-0 ceph-mon[75144]: pgmap v1302: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1303: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:25 compute-0 ceph-mon[75144]: pgmap v1303: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1304: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:48:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:48:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:48:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:48:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:48:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:48:27 compute-0 ceph-mon[75144]: pgmap v1304: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1305: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:29 compute-0 ceph-mon[75144]: pgmap v1305: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1306: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:31 compute-0 ceph-mon[75144]: pgmap v1306: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.795069) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103711795098, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 370, "num_deletes": 251, "total_data_size": 159348, "memory_usage": 167368, "flush_reason": "Manual Compaction"}
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103711798589, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 157368, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26613, "largest_seqno": 26982, "table_properties": {"data_size": 155103, "index_size": 429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5589, "raw_average_key_size": 18, "raw_value_size": 150657, "raw_average_value_size": 500, "num_data_blocks": 19, "num_entries": 301, "num_filter_entries": 301, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764103696, "oldest_key_time": 1764103696, "file_creation_time": 1764103711, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 3575 microseconds, and 1538 cpu microseconds.
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.798640) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 157368 bytes OK
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.798661) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.801236) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.801260) EVENT_LOG_v1 {"time_micros": 1764103711801253, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.801279) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 156915, prev total WAL file size 156915, number of live WAL files 2.
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.801717) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(153KB)], [62(5345KB)]
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103711801755, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 5631290, "oldest_snapshot_seqno": -1}
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 4212 keys, 4462325 bytes, temperature: kUnknown
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103711841187, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 4462325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4436864, "index_size": 13832, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 103088, "raw_average_key_size": 24, "raw_value_size": 4363739, "raw_average_value_size": 1036, "num_data_blocks": 584, "num_entries": 4212, "num_filter_entries": 4212, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764103711, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.841514) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 4462325 bytes
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.842977) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.5 rd, 112.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 5.2 +0.0 blob) out(4.3 +0.0 blob), read-write-amplify(64.1) write-amplify(28.4) OK, records in: 4721, records dropped: 509 output_compression: NoCompression
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.843007) EVENT_LOG_v1 {"time_micros": 1764103711842994, "job": 34, "event": "compaction_finished", "compaction_time_micros": 39513, "compaction_time_cpu_micros": 23267, "output_level": 6, "num_output_files": 1, "total_output_size": 4462325, "num_input_records": 4721, "num_output_records": 4212, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103711843198, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103711844975, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.801673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.845017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.845023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.845026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.845029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:48:31 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:48:31.845032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
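The JOB 34 compaction summary is internally consistent: a 157,368-byte L0 flush (table #64) plus the 5,345 KB L6 file (table #62) were rewritten into a single 4,462,325-byte L6 table (#65), and the amplification and throughput figures follow directly from those byte counts. A quick check using only numbers from the lines above:

    # Reproduce the JOB 34 compaction ratios from the logged byte counts.
    l0_in = 157_368           # table #64, the Level-0 flush output
    total_in = 5_631_290      # "input_data_size" from compaction_started
    out = 4_462_325           # table #65, the new L6 file
    micros = 39_513           # "compaction_time_micros"

    write_amplify = out / l0_in               # -> 28.4  (log: write-amplify(28.4))
    rw_amplify = (total_in + out) / l0_in     # -> 64.1  (log: read-write-amplify(64.1))
    rd_mb_s = total_in / micros               # -> 142.5 (log: 142.5 rd MB/sec)
    wr_mb_s = out / micros                    # -> 112.9 (log: 112.9 wr MB/sec)
    print(round(write_amplify, 1), round(rw_amplify, 1),
          round(rd_mb_s, 1), round(wr_mb_s, 1))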
Nov 25 20:48:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1307: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:33 compute-0 ceph-mon[75144]: pgmap v1307: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1308: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:35 compute-0 ceph-mon[75144]: pgmap v1308: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1309: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:37 compute-0 ceph-mon[75144]: pgmap v1309: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1310: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:38 compute-0 ceph-mon[75144]: pgmap v1310: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:38 compute-0 podman[273945]: 2025-11-25 20:48:38.979169471 +0000 UTC m=+0.073017008 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:48:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1311: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:40 compute-0 ceph-mon[75144]: pgmap v1311: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:41 compute-0 podman[273965]: 2025-11-25 20:48:41.003185561 +0000 UTC m=+0.090989048 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 20:48:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1312: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:42 compute-0 ceph-mon[75144]: pgmap v1312: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1313: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:44 compute-0 ceph-mon[75144]: pgmap v1313: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1314: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:46 compute-0 ceph-mon[75144]: pgmap v1314: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:47 compute-0 nova_compute[248866]: 2025-11-25 20:48:47.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1315: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:48 compute-0 nova_compute[248866]: 2025-11-25 20:48:48.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:48 compute-0 podman[273985]: 2025-11-25 20:48:48.045417835 +0000 UTC m=+0.134653857 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 20:48:48 compute-0 ceph-mon[75144]: pgmap v1315: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:48:48.966 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:48:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:48:48.967 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:48:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:48:48.967 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
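The three lockutils lines above are oslo.concurrency's standard acquire/held timing instrumentation around neutron's ProcessMonitor._check_child_processes. Any code wrapped in the lockutils synchronized decorator produces the same "Acquiring lock / acquired / released" DEBUG triplet; a minimal sketch, assuming oslo.concurrency is installed:

    # Reproduce the lockutils DEBUG lines seen above.
    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Body elided; the decorator logs waited/held times around this call.
        pass

    check_child_processes()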
Nov 25 20:48:49 compute-0 nova_compute[248866]: 2025-11-25 20:48:49.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1316: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:50 compute-0 ceph-mon[75144]: pgmap v1316: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1317: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:52 compute-0 nova_compute[248866]: 2025-11-25 20:48:52.037 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:52 compute-0 nova_compute[248866]: 2025-11-25 20:48:52.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:52 compute-0 nova_compute[248866]: 2025-11-25 20:48:52.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:48:52 compute-0 ceph-mon[75144]: pgmap v1317: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1318: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:54 compute-0 nova_compute[248866]: 2025-11-25 20:48:54.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:54 compute-0 ceph-mon[75144]: pgmap v1318: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:48:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1319: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.081 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.082 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.082 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.083 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.083 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:48:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:48:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/188812416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.515 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
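Nova's resource tracker shells out to the exact ceph df command logged above to size the RBD image backend, and the monitor's audit channel records the matching dispatch from client.openstack. A sketch of the same query and the totals nova reads back, assuming the reef-era `ceph df --format=json` layout with a top-level "stats" block:

    # Run the command nova logs above and pull the cluster totals.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    # Should agree with the pgmap lines: ~80 MiB used of 60 GiB raw.
    print(stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib)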
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.772 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.774 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5303MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.775 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.775 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:48:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:48:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:48:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:48:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:48:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:48:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.874 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.874 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:48:56 compute-0 nova_compute[248866]: 2025-11-25 20:48:56.915 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:48:56 compute-0 ceph-mon[75144]: pgmap v1319: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:56 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/188812416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:48:57
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'images']
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:48:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:48:57 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468945267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:48:57 compute-0 nova_compute[248866]: 2025-11-25 20:48:57.444 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:48:57 compute-0 nova_compute[248866]: 2025-11-25 20:48:57.451 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:48:57 compute-0 nova_compute[248866]: 2025-11-25 20:48:57.472 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:48:57 compute-0 nova_compute[248866]: 2025-11-25 20:48:57.474 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:48:57 compute-0 nova_compute[248866]: 2025-11-25 20:48:57.474 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
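The inventory payload at 20:48:57.472 is what placement actually schedules against: per resource class, capacity is the logged total minus reserved, scaled by allocation_ratio. Working the numbers from that line gives 8 VCPU x 4.0 = 32 schedulable vCPUs, (7680 - 512) MB x 1.0 = 7168 MB of RAM, and 59 GB x 0.9 = 53.1 GB of disk. The same computation over the logged dict:

    # Effective placement capacity from the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1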
Nov 25 20:48:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1320: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:57 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2468945267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:48:58 compute-0 nova_compute[248866]: 2025-11-25 20:48:58.475 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:58 compute-0 nova_compute[248866]: 2025-11-25 20:48:58.476 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:48:58 compute-0 nova_compute[248866]: 2025-11-25 20:48:58.476 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:48:58 compute-0 nova_compute[248866]: 2025-11-25 20:48:58.494 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:48:58 compute-0 nova_compute[248866]: 2025-11-25 20:48:58.495 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:48:58 compute-0 ceph-mon[75144]: pgmap v1320: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:48:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1321: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:00 compute-0 ceph-mon[75144]: pgmap v1321: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1322: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:49:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
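The pg_autoscaler targets can be reproduced from the lines themselves: the raw target is the pool's share of usable space times a per-root PG budget, then quantized. For '.mgr', 1.4371499967441557e-05 x 300 = 0.004311449990232467, exactly the logged value, which suggests a budget of 300 PGs here (consistent with the default mon_target_pg_per_osd=100 across this node's 3 OSDs, an assumption). A sketch under that assumption:

    # Reproduce the raw pg targets logged by pg_autoscaler above.
    # Assumed budget: mon_target_pg_per_osd (default 100) * 3 OSDs = 300.
    pg_budget = 100 * 3
    usage = {
        ".mgr": 1.4371499967441557e-05,   # "using ... of space" from the log
        "volumes": 0.0, "vms": 0.0, "images": 0.0,
    }
    for pool, frac in usage.items():
        raw = frac * pg_budget            # .mgr -> 0.004311449990232467
        print(pool, raw)
    # The autoscaler then quantizes to a power of two (minimum 1) and, as the
    # "(current 32)" suffixes show, leaves a pool alone unless the target is
    # far from its current pg_num.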
Nov 25 20:49:02 compute-0 ceph-mon[75144]: pgmap v1322: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1323: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:05 compute-0 ceph-mon[75144]: pgmap v1323: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:05 compute-0 sudo[274055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:49:05 compute-0 sudo[274055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:05 compute-0 sudo[274055]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:05 compute-0 sudo[274080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:49:05 compute-0 sudo[274080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:05 compute-0 sudo[274080]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:05 compute-0 sudo[274105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:49:05 compute-0 sudo[274105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:05 compute-0 sudo[274105]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:05 compute-0 sudo[274130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:49:05 compute-0 sudo[274130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1324: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:06 compute-0 sudo[274130]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 25 20:49:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 25 20:49:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 25 20:49:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 20:49:06 compute-0 ceph-mgr[75443]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:49:06 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 25 20:49:06 compute-0 ceph-mgr[75443]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:49:06 compute-0 ceph-mgr[75443]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
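The WRN pair above records a failed memory autotune: cephadm computed a per-OSD osd_memory_target of 44,740,198 bytes (printed as 43691k, i.e. divided by 1024) for the three co-located OSDs, but osd_memory_target enforces a hard minimum of 939,524,096 bytes (896 MiB), so the set is rejected and cephadm instead removes the per-OSD overrides (the three "config rm" dispatches just before). 44,740,198 x 3 is only about 128 MiB, so on this 7680 MB host nearly the entire autotune budget is evidently consumed by the other co-located daemons. The clamp itself, using only the logged values:

    # The check behind the warning above: computed per-OSD target vs.
    # osd_memory_target's hard minimum.
    MIN_OSD_MEMORY_TARGET = 939_524_096     # 896 MiB, from the error text

    computed = 44_740_198                   # bytes; logged as 43691k (KiB)
    assert computed // 1024 == 43_691
    if computed < MIN_OSD_MEMORY_TARGET:
        # Matches the log: the set fails, cephadm falls back to
        # "config rm osd.N osd_memory_target" and the default applies.
        print(f"Value '{computed}' is below minimum {MIN_OSD_MEMORY_TARGET}")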
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:49:06 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:49:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:49:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:49:06 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev cce25664-6267-4bf1-aacd-97e1df680d18 does not exist
Nov 25 20:49:06 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d70ede14-26d1-44a1-bf68-4adc9e8b097f does not exist
Nov 25 20:49:06 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev bf4cb8cc-5781-4300-adbf-f874eac5008f does not exist
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:49:06 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:49:06 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:49:06 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:49:06 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:49:06 compute-0 sudo[274185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:49:06 compute-0 sudo[274185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:06 compute-0 sudo[274185]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:06 compute-0 sudo[274210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:49:06 compute-0 sudo[274210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:06 compute-0 sudo[274210]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:06 compute-0 sudo[274235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:49:06 compute-0 sudo[274235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:06 compute-0 sudo[274235]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:06 compute-0 sudo[274260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:49:06 compute-0 sudo[274260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
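The long sudo command above is cephadm's remote OSD-preparation step: the mgr ships a content-addressed cephadm binary to the host and runs it with --config-json - so the bootstrap config and keyring arrive on stdin rather than being written to disk, wrapping ceph-volume lvm batch --no-auto over three pre-created LVs, with --no-systemd because cephadm manages the units itself. A sketch of that stdin-payload convention; the payload keys and abridged paths here are illustrative, not a verbatim capture:

    # Illustration of cephadm's "--config-json -" convention seen above.
    import json
    import subprocess

    payload = {"config": "[global]\nfsid = <fsid>\n", "keyring": "..."}
    subprocess.run(
        ["sudo", "/bin/python3", "/var/lib/ceph/<fsid>/cephadm.<digest>",
         "--timeout", "895", "ceph-volume",
         "--fsid", "712dd110-763a-5547-8ef7-acda1414fdce",
         "--config-json", "-", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"],
        input=json.dumps(payload).encode(), check=True,
    )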
Nov 25 20:49:07 compute-0 ceph-mon[75144]: pgmap v1324: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 20:49:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 20:49:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 20:49:07 compute-0 ceph-mon[75144]: Adjusting osd_memory_target on compute-0 to 43691k
Nov 25 20:49:07 compute-0 ceph-mon[75144]: Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 25 20:49:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:49:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:49:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:49:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:49:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:49:07 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:49:07 compute-0 podman[274324]: 2025-11-25 20:49:07.024902966 +0000 UTC m=+0.069143563 container create 25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:49:07 compute-0 systemd[1]: Started libpod-conmon-25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f.scope.
Nov 25 20:49:07 compute-0 podman[274324]: 2025-11-25 20:49:06.993514761 +0000 UTC m=+0.037755418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:49:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:49:07 compute-0 podman[274324]: 2025-11-25 20:49:07.117302681 +0000 UTC m=+0.161543328 container init 25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:49:07 compute-0 podman[274324]: 2025-11-25 20:49:07.126291505 +0000 UTC m=+0.170532102 container start 25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:49:07 compute-0 podman[274324]: 2025-11-25 20:49:07.130065048 +0000 UTC m=+0.174305705 container attach 25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:49:07 compute-0 quizzical_aryabhata[274340]: 167 167
Nov 25 20:49:07 compute-0 systemd[1]: libpod-25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f.scope: Deactivated successfully.
Nov 25 20:49:07 compute-0 conmon[274340]: conmon 25f0a9be75efce79725a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f.scope/container/memory.events
Nov 25 20:49:07 compute-0 podman[274324]: 2025-11-25 20:49:07.135462985 +0000 UTC m=+0.179703582 container died 25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 20:49:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-936e7b0d35fb57b1e17e7de430da7a61bd235727307417aada3dc7d230605050-merged.mount: Deactivated successfully.
Nov 25 20:49:07 compute-0 podman[274324]: 2025-11-25 20:49:07.191274674 +0000 UTC m=+0.235515281 container remove 25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:49:07 compute-0 systemd[1]: libpod-conmon-25f0a9be75efce79725afcb7cf9ea42f443a647277e57e1ec9b42c3f30e5540f.scope: Deactivated successfully.
Nov 25 20:49:07 compute-0 podman[274363]: 2025-11-25 20:49:07.421300365 +0000 UTC m=+0.051613996 container create eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 25 20:49:07 compute-0 systemd[1]: Started libpod-conmon-eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80.scope.
Nov 25 20:49:07 compute-0 podman[274363]: 2025-11-25 20:49:07.394167076 +0000 UTC m=+0.024480757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:49:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9dd6a6511d8d1c930a62fc89fa5f3b8df9f103e59e6f1dace1795c13d180b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9dd6a6511d8d1c930a62fc89fa5f3b8df9f103e59e6f1dace1795c13d180b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9dd6a6511d8d1c930a62fc89fa5f3b8df9f103e59e6f1dace1795c13d180b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9dd6a6511d8d1c930a62fc89fa5f3b8df9f103e59e6f1dace1795c13d180b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9dd6a6511d8d1c930a62fc89fa5f3b8df9f103e59e6f1dace1795c13d180b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:07 compute-0 podman[274363]: 2025-11-25 20:49:07.542680228 +0000 UTC m=+0.172993879 container init eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 20:49:07 compute-0 podman[274363]: 2025-11-25 20:49:07.562319032 +0000 UTC m=+0.192632653 container start eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:49:07 compute-0 podman[274363]: 2025-11-25 20:49:07.566078035 +0000 UTC m=+0.196391666 container attach eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 20:49:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1325: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:08 compute-0 serene_bassi[274379]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:49:08 compute-0 serene_bassi[274379]: --> relative data size: 1.0
Nov 25 20:49:08 compute-0 serene_bassi[274379]: --> All data devices are unavailable
Nov 25 20:49:08 compute-0 systemd[1]: libpod-eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80.scope: Deactivated successfully.
Nov 25 20:49:08 compute-0 systemd[1]: libpod-eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80.scope: Consumed 1.185s CPU time.
Nov 25 20:49:08 compute-0 podman[274363]: 2025-11-25 20:49:08.792335091 +0000 UTC m=+1.422648732 container died eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:49:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a9dd6a6511d8d1c930a62fc89fa5f3b8df9f103e59e6f1dace1795c13d180b3-merged.mount: Deactivated successfully.
Nov 25 20:49:08 compute-0 podman[274363]: 2025-11-25 20:49:08.884888821 +0000 UTC m=+1.515202462 container remove eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:49:08 compute-0 systemd[1]: libpod-conmon-eeb08117aba3fa09e28c717878c933194ba7fb1248a6407fa65b5816caf95c80.scope: Deactivated successfully.
Nov 25 20:49:08 compute-0 sudo[274260]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:09 compute-0 ceph-mon[75144]: pgmap v1325: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:09 compute-0 sudo[274420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:49:09 compute-0 sudo[274420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:09 compute-0 sudo[274420]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:09 compute-0 sudo[274450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:49:09 compute-0 podman[274444]: 2025-11-25 20:49:09.156347388 +0000 UTC m=+0.082302510 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 20:49:09 compute-0 sudo[274450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:09 compute-0 sudo[274450]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:09 compute-0 sudo[274490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:49:09 compute-0 sudo[274490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:09 compute-0 sudo[274490]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:09 compute-0 sudo[274516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:49:09 compute-0 sudo[274516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:09 compute-0 podman[274582]: 2025-11-25 20:49:09.780896527 +0000 UTC m=+0.067766245 container create 242348e286929c6e62909277dc2b083beebd02141959276a26e7423202c9691f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:49:09 compute-0 systemd[1]: Started libpod-conmon-242348e286929c6e62909277dc2b083beebd02141959276a26e7423202c9691f.scope.
Nov 25 20:49:09 compute-0 podman[274582]: 2025-11-25 20:49:09.754470799 +0000 UTC m=+0.041340537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:49:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:49:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1326: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:09 compute-0 podman[274582]: 2025-11-25 20:49:09.887675394 +0000 UTC m=+0.174545112 container init 242348e286929c6e62909277dc2b083beebd02141959276a26e7423202c9691f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:49:09 compute-0 podman[274582]: 2025-11-25 20:49:09.89671479 +0000 UTC m=+0.183584478 container start 242348e286929c6e62909277dc2b083beebd02141959276a26e7423202c9691f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:49:09 compute-0 podman[274582]: 2025-11-25 20:49:09.90038716 +0000 UTC m=+0.187256868 container attach 242348e286929c6e62909277dc2b083beebd02141959276a26e7423202c9691f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 20:49:09 compute-0 musing_swirles[274599]: 167 167
Nov 25 20:49:09 compute-0 systemd[1]: libpod-242348e286929c6e62909277dc2b083beebd02141959276a26e7423202c9691f.scope: Deactivated successfully.
Nov 25 20:49:09 compute-0 podman[274582]: 2025-11-25 20:49:09.906729402 +0000 UTC m=+0.193599100 container died 242348e286929c6e62909277dc2b083beebd02141959276a26e7423202c9691f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 20:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-69c3084596d9ae4d9bae511ffbd54d132cbd16c21711e10286e500c4f9a25a5e-merged.mount: Deactivated successfully.
Nov 25 20:49:09 compute-0 podman[274582]: 2025-11-25 20:49:09.959602372 +0000 UTC m=+0.246472100 container remove 242348e286929c6e62909277dc2b083beebd02141959276a26e7423202c9691f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 20:49:09 compute-0 systemd[1]: libpod-conmon-242348e286929c6e62909277dc2b083beebd02141959276a26e7423202c9691f.scope: Deactivated successfully.
Nov 25 20:49:10 compute-0 podman[274622]: 2025-11-25 20:49:10.227235046 +0000 UTC m=+0.071527028 container create 4effa5412dce671cc68f82110de9da0ee08ae87afebdf1eb4b63cd317694a8b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:49:10 compute-0 systemd[1]: Started libpod-conmon-4effa5412dce671cc68f82110de9da0ee08ae87afebdf1eb4b63cd317694a8b9.scope.
Nov 25 20:49:10 compute-0 podman[274622]: 2025-11-25 20:49:10.199509862 +0000 UTC m=+0.043801924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:49:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eff6262424490865ae2b4c278aa75b952dd45174aa2c34e7ac9ee21dedec497/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eff6262424490865ae2b4c278aa75b952dd45174aa2c34e7ac9ee21dedec497/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eff6262424490865ae2b4c278aa75b952dd45174aa2c34e7ac9ee21dedec497/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eff6262424490865ae2b4c278aa75b952dd45174aa2c34e7ac9ee21dedec497/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:10 compute-0 podman[274622]: 2025-11-25 20:49:10.329789677 +0000 UTC m=+0.174081729 container init 4effa5412dce671cc68f82110de9da0ee08ae87afebdf1eb4b63cd317694a8b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:49:10 compute-0 podman[274622]: 2025-11-25 20:49:10.348090966 +0000 UTC m=+0.192382948 container start 4effa5412dce671cc68f82110de9da0ee08ae87afebdf1eb4b63cd317694a8b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:49:10 compute-0 podman[274622]: 2025-11-25 20:49:10.352023592 +0000 UTC m=+0.196315604 container attach 4effa5412dce671cc68f82110de9da0ee08ae87afebdf1eb4b63cd317694a8b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:49:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:11 compute-0 ceph-mon[75144]: pgmap v1326: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:11 compute-0 hopeful_black[274638]: {
Nov 25 20:49:11 compute-0 hopeful_black[274638]:     "0": [
Nov 25 20:49:11 compute-0 hopeful_black[274638]:         {
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "devices": [
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "/dev/loop3"
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             ],
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_name": "ceph_lv0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_size": "21470642176",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "name": "ceph_lv0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "tags": {
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.cluster_name": "ceph",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.crush_device_class": "",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.encrypted": "0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.osd_id": "0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.type": "block",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.vdo": "0"
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             },
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "type": "block",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "vg_name": "ceph_vg0"
Nov 25 20:49:11 compute-0 hopeful_black[274638]:         }
Nov 25 20:49:11 compute-0 hopeful_black[274638]:     ],
Nov 25 20:49:11 compute-0 hopeful_black[274638]:     "1": [
Nov 25 20:49:11 compute-0 hopeful_black[274638]:         {
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "devices": [
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "/dev/loop4"
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             ],
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_name": "ceph_lv1",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_size": "21470642176",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "name": "ceph_lv1",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "tags": {
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.cluster_name": "ceph",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.crush_device_class": "",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.encrypted": "0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.osd_id": "1",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.type": "block",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.vdo": "0"
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             },
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "type": "block",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "vg_name": "ceph_vg1"
Nov 25 20:49:11 compute-0 hopeful_black[274638]:         }
Nov 25 20:49:11 compute-0 hopeful_black[274638]:     ],
Nov 25 20:49:11 compute-0 hopeful_black[274638]:     "2": [
Nov 25 20:49:11 compute-0 hopeful_black[274638]:         {
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "devices": [
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "/dev/loop5"
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             ],
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_name": "ceph_lv2",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_size": "21470642176",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "name": "ceph_lv2",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "tags": {
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.cluster_name": "ceph",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.crush_device_class": "",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.encrypted": "0",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.osd_id": "2",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.type": "block",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:                 "ceph.vdo": "0"
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             },
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "type": "block",
Nov 25 20:49:11 compute-0 hopeful_black[274638]:             "vg_name": "ceph_vg2"
Nov 25 20:49:11 compute-0 hopeful_black[274638]:         }
Nov 25 20:49:11 compute-0 hopeful_black[274638]:     ]
Nov 25 20:49:11 compute-0 hopeful_black[274638]: }
Nov 25 20:49:11 compute-0 systemd[1]: libpod-4effa5412dce671cc68f82110de9da0ee08ae87afebdf1eb4b63cd317694a8b9.scope: Deactivated successfully.
Nov 25 20:49:11 compute-0 podman[274647]: 2025-11-25 20:49:11.150783453 +0000 UTC m=+0.040821042 container died 4effa5412dce671cc68f82110de9da0ee08ae87afebdf1eb4b63cd317694a8b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:49:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eff6262424490865ae2b4c278aa75b952dd45174aa2c34e7ac9ee21dedec497-merged.mount: Deactivated successfully.
Nov 25 20:49:11 compute-0 podman[274647]: 2025-11-25 20:49:11.22743092 +0000 UTC m=+0.117468489 container remove 4effa5412dce671cc68f82110de9da0ee08ae87afebdf1eb4b63cd317694a8b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:49:11 compute-0 systemd[1]: libpod-conmon-4effa5412dce671cc68f82110de9da0ee08ae87afebdf1eb4b63cd317694a8b9.scope: Deactivated successfully.
Nov 25 20:49:11 compute-0 podman[274648]: 2025-11-25 20:49:11.237573105 +0000 UTC m=+0.106575152 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 20:49:11 compute-0 sudo[274516]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:11 compute-0 sudo[274682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:49:11 compute-0 sudo[274682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:11 compute-0 sudo[274682]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:11 compute-0 sudo[274707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:49:11 compute-0 sudo[274707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:11 compute-0 sudo[274707]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:11 compute-0 sudo[274732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:49:11 compute-0 sudo[274732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:11 compute-0 sudo[274732]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:11 compute-0 sudo[274757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:49:11 compute-0 sudo[274757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1327: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:12 compute-0 podman[274823]: 2025-11-25 20:49:12.028254316 +0000 UTC m=+0.048253825 container create 0aa1a6711e83745d9d4345fc05c78f4efb613c8797620735e07ce567d0f9ba13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:49:12 compute-0 systemd[1]: Started libpod-conmon-0aa1a6711e83745d9d4345fc05c78f4efb613c8797620735e07ce567d0f9ba13.scope.
Nov 25 20:49:12 compute-0 podman[274823]: 2025-11-25 20:49:12.006168515 +0000 UTC m=+0.026168124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:49:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:49:12 compute-0 podman[274823]: 2025-11-25 20:49:12.118891043 +0000 UTC m=+0.138890572 container init 0aa1a6711e83745d9d4345fc05c78f4efb613c8797620735e07ce567d0f9ba13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:49:12 compute-0 podman[274823]: 2025-11-25 20:49:12.131256629 +0000 UTC m=+0.151256138 container start 0aa1a6711e83745d9d4345fc05c78f4efb613c8797620735e07ce567d0f9ba13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:49:12 compute-0 podman[274823]: 2025-11-25 20:49:12.135350131 +0000 UTC m=+0.155349660 container attach 0aa1a6711e83745d9d4345fc05c78f4efb613c8797620735e07ce567d0f9ba13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:49:12 compute-0 admiring_nash[274839]: 167 167
Nov 25 20:49:12 compute-0 systemd[1]: libpod-0aa1a6711e83745d9d4345fc05c78f4efb613c8797620735e07ce567d0f9ba13.scope: Deactivated successfully.
Nov 25 20:49:12 compute-0 podman[274823]: 2025-11-25 20:49:12.140265065 +0000 UTC m=+0.160264584 container died 0aa1a6711e83745d9d4345fc05c78f4efb613c8797620735e07ce567d0f9ba13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:49:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-98508f91980818f7dfc9906c839de4ee90a85db34cc265381d1b00736596fc4b-merged.mount: Deactivated successfully.
Nov 25 20:49:12 compute-0 podman[274823]: 2025-11-25 20:49:12.187907222 +0000 UTC m=+0.207906731 container remove 0aa1a6711e83745d9d4345fc05c78f4efb613c8797620735e07ce567d0f9ba13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:49:12 compute-0 systemd[1]: libpod-conmon-0aa1a6711e83745d9d4345fc05c78f4efb613c8797620735e07ce567d0f9ba13.scope: Deactivated successfully.
Nov 25 20:49:12 compute-0 podman[274863]: 2025-11-25 20:49:12.409822212 +0000 UTC m=+0.067922210 container create d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 20:49:12 compute-0 systemd[1]: Started libpod-conmon-d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969.scope.
Nov 25 20:49:12 compute-0 podman[274863]: 2025-11-25 20:49:12.381641245 +0000 UTC m=+0.039741283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:49:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226d456aa1d632a0c6f0e5bfac33652337643df5628034f66a0b07a9a88c22a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226d456aa1d632a0c6f0e5bfac33652337643df5628034f66a0b07a9a88c22a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226d456aa1d632a0c6f0e5bfac33652337643df5628034f66a0b07a9a88c22a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226d456aa1d632a0c6f0e5bfac33652337643df5628034f66a0b07a9a88c22a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:49:12 compute-0 podman[274863]: 2025-11-25 20:49:12.499061171 +0000 UTC m=+0.157161149 container init d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:49:12 compute-0 podman[274863]: 2025-11-25 20:49:12.514565562 +0000 UTC m=+0.172665520 container start d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 20:49:12 compute-0 podman[274863]: 2025-11-25 20:49:12.518284694 +0000 UTC m=+0.176384772 container attach d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:49:13 compute-0 ceph-mon[75144]: pgmap v1327: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]: {
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "osd_id": 2,
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "type": "bluestore"
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:     },
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "osd_id": 1,
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "type": "bluestore"
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:     },
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "osd_id": 0,
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:         "type": "bluestore"
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]:     }
Nov 25 20:49:13 compute-0 interesting_wescoff[274880]: }
Nov 25 20:49:13 compute-0 systemd[1]: libpod-d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969.scope: Deactivated successfully.
Nov 25 20:49:13 compute-0 systemd[1]: libpod-d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969.scope: Consumed 1.109s CPU time.
Nov 25 20:49:13 compute-0 podman[274863]: 2025-11-25 20:49:13.616486075 +0000 UTC m=+1.274586113 container died d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:49:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-226d456aa1d632a0c6f0e5bfac33652337643df5628034f66a0b07a9a88c22a4-merged.mount: Deactivated successfully.
Nov 25 20:49:13 compute-0 podman[274863]: 2025-11-25 20:49:13.695003052 +0000 UTC m=+1.353103020 container remove d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:49:13 compute-0 systemd[1]: libpod-conmon-d5a5f8074b221d9b6c89984524109a4002200c18d6efc52a2de2144ce2dae969.scope: Deactivated successfully.
Nov 25 20:49:13 compute-0 sudo[274757]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:49:13 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:49:13 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:49:13 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:49:13 compute-0 sudo[274927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:49:13 compute-0 sudo[274927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:13 compute-0 sudo[274927]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1328: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:13 compute-0 sudo[274952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:49:13 compute-0 sudo[274952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:49:13 compute-0 sudo[274952]: pam_unix(sudo:session): session closed for user root
Nov 25 20:49:14 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:49:14 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:49:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:15 compute-0 ceph-mon[75144]: pgmap v1328: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1329: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:49:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2629510280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:49:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:49:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2629510280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:49:17 compute-0 ceph-mon[75144]: pgmap v1329: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2629510280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:49:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2629510280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
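The client.openstack "df" and "osd pool get-quota" dispatches above arrive at the monitor as mon_command JSON payloads over librados. A hedged sketch of issuing the same query from Python (assumes python3-rados is installed and that /etc/ceph/ceph.conf plus the client.openstack keyring are readable; this is illustrative, not this deployment's actual driver code):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    # Same payload as the mon_command({"prefix":"df", ...}) seen in the audit log.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    df = json.loads(outbuf)
    cluster.shutdown()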
Nov 25 20:49:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1330: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:19 compute-0 podman[274977]: 2025-11-25 20:49:19.021818535 +0000 UTC m=+0.112505843 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
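Each health_status=healthy event like the one above is podman running the configured healthcheck ('test': '/openstack/healthcheck', with the healthcheck directory bind-mounted into the container). A minimal sketch of reading the same state back out of podman; the Health field layout follows the Docker-compatible inspect format and the exact field names can vary across podman versions:

    import json
    import subprocess

    out = subprocess.check_output(
        ["podman", "inspect", "ovn_controller",
         "--format", "{{json .State.Health}}"])
    health = json.loads(out)
    # Corresponds to health_status / health_failing_streak in the journal events.
    print(health["Status"], health["FailingStreak"])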
Nov 25 20:49:19 compute-0 ceph-mon[75144]: pgmap v1330: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1331: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:21 compute-0 ceph-mon[75144]: pgmap v1331: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1332: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:23 compute-0 ceph-mon[75144]: pgmap v1332: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1333: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:25 compute-0 ceph-mon[75144]: pgmap v1333: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1334: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:49:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:49:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:49:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:49:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:49:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:49:27 compute-0 ceph-mon[75144]: pgmap v1334: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1335: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:29 compute-0 ceph-mon[75144]: pgmap v1335: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1336: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:30 compute-0 ceph-mon[75144]: pgmap v1336: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1337: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:32 compute-0 ceph-mon[75144]: pgmap v1337: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1338: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:34 compute-0 ceph-mon[75144]: pgmap v1338: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1339: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:36 compute-0 ceph-mon[75144]: pgmap v1339: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1340: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:38 compute-0 ceph-mon[75144]: pgmap v1340: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1341: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:40 compute-0 podman[275004]: 2025-11-25 20:49:40.00020568 +0000 UTC m=+0.086990758 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 20:49:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:40 compute-0 ceph-mon[75144]: pgmap v1341: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1342: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:42 compute-0 podman[275023]: 2025-11-25 20:49:42.002835598 +0000 UTC m=+0.088480630 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:49:42 compute-0 ceph-mon[75144]: pgmap v1342: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1343: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:44 compute-0 ceph-mon[75144]: pgmap v1343: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1344: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:47 compute-0 ceph-mon[75144]: pgmap v1344: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:47 compute-0 nova_compute[248866]: 2025-11-25 20:49:47.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:49:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1345: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:49:48.968 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:49:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:49:48.969 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:49:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:49:48.969 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:49:49 compute-0 ceph-mon[75144]: pgmap v1345: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1346: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:50 compute-0 podman[275042]: 2025-11-25 20:49:50.039025044 +0000 UTC m=+0.126058032 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 20:49:50 compute-0 nova_compute[248866]: 2025-11-25 20:49:50.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:49:50 compute-0 nova_compute[248866]: 2025-11-25 20:49:50.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:49:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:51 compute-0 ceph-mon[75144]: pgmap v1346: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1347: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:53 compute-0 ceph-mon[75144]: pgmap v1347: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1348: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:54 compute-0 nova_compute[248866]: 2025-11-25 20:49:54.039 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:49:54 compute-0 nova_compute[248866]: 2025-11-25 20:49:54.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:49:54 compute-0 nova_compute[248866]: 2025-11-25 20:49:54.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:49:55 compute-0 ceph-mon[75144]: pgmap v1348: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:49:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1349: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.075 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.075 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.075 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.076 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.076 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:49:56 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:49:56 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/956010883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.543 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
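Note that nova-compute shells out to the ceph CLI here rather than using librados; the "returned: 0 in 0.467s" figure is oslo processutils timing the subprocess. A minimal sketch of the same call, pulling the cluster-wide free space out of the JSON (the stats key names follow the usual `ceph df --format=json` layout; verify against your Ceph release):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    free_gib = df["stats"]["total_avail_bytes"] / 1024 ** 3
    print(f"{free_gib:.2f} GiB available")  # ~60 GiB, matching the pgmap lines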
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.788 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.791 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5293MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.791 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:49:56 compute-0 nova_compute[248866]: 2025-11-25 20:49:56.792 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:49:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:49:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:49:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:49:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:49:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:49:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:49:57 compute-0 ceph-mon[75144]: pgmap v1349: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:57 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/956010883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:49:57
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images']
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:49:57 compute-0 nova_compute[248866]: 2025-11-25 20:49:57.125 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:49:57 compute-0 nova_compute[248866]: 2025-11-25 20:49:57.126 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:49:57 compute-0 nova_compute[248866]: 2025-11-25 20:49:57.147 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:49:57 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:49:57 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3219351565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:49:57 compute-0 nova_compute[248866]: 2025-11-25 20:49:57.650 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:49:57 compute-0 nova_compute[248866]: 2025-11-25 20:49:57.657 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:49:57 compute-0 nova_compute[248866]: 2025-11-25 20:49:57.749 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
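The inventory dict above is what nova reports to Placement; the schedulable capacity for each resource class is (total - reserved) * allocation_ratio. Worked out for the values in the log:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, f in inventory.items():
        usable = (f["total"] - f["reserved"]) * f["allocation_ratio"]
        print(rc, usable)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1

So this otherwise idle 8-vCPU host advertises 32 schedulable vCPUs at the 4.0 ratio, while memory and disk stay close to their physical sizes.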
Nov 25 20:49:57 compute-0 nova_compute[248866]: 2025-11-25 20:49:57.751 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:49:57 compute-0 nova_compute[248866]: 2025-11-25 20:49:57.752 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:49:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1350: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:58 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3219351565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:49:58 compute-0 nova_compute[248866]: 2025-11-25 20:49:58.752 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:49:58 compute-0 nova_compute[248866]: 2025-11-25 20:49:58.753 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:49:58 compute-0 nova_compute[248866]: 2025-11-25 20:49:58.754 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:49:58 compute-0 nova_compute[248866]: 2025-11-25 20:49:58.777 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:49:58 compute-0 nova_compute[248866]: 2025-11-25 20:49:58.778 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:49:59 compute-0 ceph-mon[75144]: pgmap v1350: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:49:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1351: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:01 compute-0 ceph-mon[75144]: pgmap v1351: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1352: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:50:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
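The pg_autoscaler numbers above are internally consistent: for the '.mgr' pool, the raw pg target is the pool's share of raw capacity times the PG budget (target PGs per OSD times OSD count), then quantized; anything below the pool minimum rounds to 1 PG. Assuming the default mon_target_pg_per_osd of 100 (not shown in the log) and the three OSDs seen earlier, the logged figure reproduces exactly:

    usage_ratio = 1.4371499967441557e-05   # from the '.mgr' line above
    target_pg_per_osd = 100                # assumed default
    n_osds = 3                             # osd.0..osd.2 on this host
    pg_target = usage_ratio * target_pg_per_osd * n_osds
    print(pg_target)                       # 0.004311449990232467, as logged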
Nov 25 20:50:03 compute-0 ceph-mon[75144]: pgmap v1352: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1353: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:05 compute-0 ceph-mon[75144]: pgmap v1353: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1354: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:07 compute-0 ceph-mon[75144]: pgmap v1354: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1355: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:08 compute-0 nova_compute[248866]: 2025-11-25 20:50:08.063 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:09 compute-0 ceph-mon[75144]: pgmap v1355: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1356: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:10 compute-0 podman[275112]: 2025-11-25 20:50:10.977569639 +0000 UTC m=+0.072334470 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 25 20:50:11 compute-0 ceph-mon[75144]: pgmap v1356: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1357: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:13 compute-0 podman[275131]: 2025-11-25 20:50:13.00679898 +0000 UTC m=+0.086836224 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 20:50:13 compute-0 ceph-mon[75144]: pgmap v1357: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:13 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1358: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:14 compute-0 sudo[275152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:50:14 compute-0 sudo[275152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:14 compute-0 sudo[275152]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:14 compute-0 sudo[275177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:50:14 compute-0 sudo[275177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:14 compute-0 sudo[275177]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:14 compute-0 sudo[275202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:50:14 compute-0 sudo[275202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:14 compute-0 sudo[275202]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:14 compute-0 sudo[275227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:50:14 compute-0 sudo[275227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:14 compute-0 sudo[275227]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:50:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:50:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:50:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:50:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:50:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:50:14 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 05001482-ea85-480c-9dd8-1a0d05d80467 does not exist
Nov 25 20:50:14 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev aa80a4eb-82bf-41b3-a3b5-e4654340daf7 does not exist
Nov 25 20:50:14 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev fb18239e-3a8e-459f-9f48-7f72c389145b does not exist
Nov 25 20:50:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:50:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:50:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:50:14 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:50:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:50:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:50:15 compute-0 sudo[275283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:50:15 compute-0 sudo[275283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:15 compute-0 sudo[275283]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:15 compute-0 ceph-mon[75144]: pgmap v1358: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:50:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:50:15 compute-0 sudo[275308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:50:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:50:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:50:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:50:15 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:50:15 compute-0 sudo[275308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:15 compute-0 sudo[275308]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:15 compute-0 sudo[275333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:50:15 compute-0 sudo[275333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:15 compute-0 sudo[275333]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:15 compute-0 sudo[275358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:50:15 compute-0 sudo[275358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
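This ceph-volume invocation ties together the mon commands from 20:50:14: '--config-json -' makes the cephadm binary read a JSON blob from stdin, which the orchestrator assembles from "config generate-minimal-conf" and "auth get client.bootstrap-osd". A hedged sketch of the payload shape; the "config"/"keyring" key names are what cephadm conventionally expects, and should be treated as an assumption here:

    import json

    payload = json.dumps({
        # Output of: ceph config generate-minimal-conf
        "config": "[global]\n\tfsid = 712dd110-763a-5547-8ef7-acda1414fdce\n...",
        # Output of: ceph auth get client.bootstrap-osd
        "keyring": "...",
    })
    # cephadm ... ceph-volume --config-json - ... reads this payload from stdin.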
Nov 25 20:50:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:15 compute-0 podman[275423]: 2025-11-25 20:50:15.812141305 +0000 UTC m=+0.060133578 container create 7becdf6386603374dab432eadca5a9066576d9da1da1db8b0673342f6f50841c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hawking, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:50:15 compute-0 systemd[1]: Started libpod-conmon-7becdf6386603374dab432eadca5a9066576d9da1da1db8b0673342f6f50841c.scope.
Nov 25 20:50:15 compute-0 podman[275423]: 2025-11-25 20:50:15.78255718 +0000 UTC m=+0.030549513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:50:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:50:15 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1359: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:15 compute-0 podman[275423]: 2025-11-25 20:50:15.925276035 +0000 UTC m=+0.173268368 container init 7becdf6386603374dab432eadca5a9066576d9da1da1db8b0673342f6f50841c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:50:15 compute-0 podman[275423]: 2025-11-25 20:50:15.937764564 +0000 UTC m=+0.185756817 container start 7becdf6386603374dab432eadca5a9066576d9da1da1db8b0673342f6f50841c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 20:50:15 compute-0 podman[275423]: 2025-11-25 20:50:15.943197202 +0000 UTC m=+0.191189525 container attach 7becdf6386603374dab432eadca5a9066576d9da1da1db8b0673342f6f50841c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hawking, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:50:15 compute-0 bold_hawking[275439]: 167 167
Nov 25 20:50:15 compute-0 systemd[1]: libpod-7becdf6386603374dab432eadca5a9066576d9da1da1db8b0673342f6f50841c.scope: Deactivated successfully.
Nov 25 20:50:15 compute-0 podman[275423]: 2025-11-25 20:50:15.947139919 +0000 UTC m=+0.195132202 container died 7becdf6386603374dab432eadca5a9066576d9da1da1db8b0673342f6f50841c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 20:50:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e25b3612812770e2140d9071f67b954b125497e9abe1b5eb3ef7b0a8d2b0dc75-merged.mount: Deactivated successfully.
Nov 25 20:50:16 compute-0 podman[275423]: 2025-11-25 20:50:16.004511591 +0000 UTC m=+0.252503844 container remove 7becdf6386603374dab432eadca5a9066576d9da1da1db8b0673342f6f50841c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:50:16 compute-0 systemd[1]: libpod-conmon-7becdf6386603374dab432eadca5a9066576d9da1da1db8b0673342f6f50841c.scope: Deactivated successfully.
Nov 25 20:50:16 compute-0 podman[275462]: 2025-11-25 20:50:16.220252733 +0000 UTC m=+0.056077367 container create c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 20:50:16 compute-0 systemd[1]: Started libpod-conmon-c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863.scope.
Nov 25 20:50:16 compute-0 podman[275462]: 2025-11-25 20:50:16.193018992 +0000 UTC m=+0.028843696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:50:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f291b149a2bc577858491aebb75f39ac07939481db477632f27e99709c831f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f291b149a2bc577858491aebb75f39ac07939481db477632f27e99709c831f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f291b149a2bc577858491aebb75f39ac07939481db477632f27e99709c831f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f291b149a2bc577858491aebb75f39ac07939481db477632f27e99709c831f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f291b149a2bc577858491aebb75f39ac07939481db477632f27e99709c831f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:16 compute-0 podman[275462]: 2025-11-25 20:50:16.316313277 +0000 UTC m=+0.152137971 container init c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:50:16 compute-0 podman[275462]: 2025-11-25 20:50:16.33074009 +0000 UTC m=+0.166564744 container start c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:50:16 compute-0 podman[275462]: 2025-11-25 20:50:16.335438088 +0000 UTC m=+0.171262712 container attach c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 20:50:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:50:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/514131838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:50:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:50:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/514131838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:50:17 compute-0 ceph-mon[75144]: pgmap v1359: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/514131838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:50:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/514131838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
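The two audited mon commands above are the JSON forms of `ceph df` and `ceph osd pool get-quota volumes`, issued by `client.openstack` (the OpenStack services on 192.168.122.10 polling pool capacity). A minimal sketch of issuing the same two queries from the node, assuming the `ceph` CLI and a readable keyring are present; the pool name "volumes" is taken from the audit lines above:

    #!/usr/bin/env python3
    # Sketch: replay the two mon commands seen in the audit log
    # ({"prefix":"df"} and {"prefix":"osd pool get-quota","pool":"volumes"})
    # through the ceph CLI and print a capacity summary.
    import json
    import subprocess

    def mon_command(*args: str) -> dict:
        # --format json matches the "format":"json" field in the audit entries
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    df = mon_command("df")
    quota = mon_command("osd", "pool", "get-quota", "volumes")
    print("total avail bytes:", df["stats"]["total_avail_bytes"])
    print("volumes pool quota:", quota)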
Nov 25 20:50:17 compute-0 bold_carson[275479]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:50:17 compute-0 bold_carson[275479]: --> relative data size: 1.0
Nov 25 20:50:17 compute-0 bold_carson[275479]: --> All data devices are unavailable
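The three `bold_carson` lines are a ceph-volume batch report: the drive group passed in 0 physical disks and 3 LVM logical volumes, and "All data devices are unavailable" here means the LVs are already consumed by existing OSDs (their tags appear in the `lvm list` output further down), so this apply pass creates nothing new. A minimal sketch of asking ceph-volume why devices are rejected, assuming the packaged `cephadm` binary (rather than the digest-named copy under /var/lib/ceph that the sudo lines invoke) and the fsid shown in this log; run as root:

    #!/usr/bin/env python3
    # Sketch: list devices ceph-volume considers unavailable and the
    # reasons, via the same cephadm -> ceph-volume calling convention
    # used by the sudo lines in this log.
    import json
    import subprocess

    FSID = "712dd110-763a-5547-8ef7-acda1414fdce"  # from this log
    out = subprocess.check_output(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "inventory", "--format", "json"]
    )
    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev["path"], "->", dev.get("rejected_reasons", []))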
Nov 25 20:50:17 compute-0 systemd[1]: libpod-c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863.scope: Deactivated successfully.
Nov 25 20:50:17 compute-0 systemd[1]: libpod-c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863.scope: Consumed 1.035s CPU time.
Nov 25 20:50:17 compute-0 podman[275462]: 2025-11-25 20:50:17.409740829 +0000 UTC m=+1.245565463 container died c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:50:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-33f291b149a2bc577858491aebb75f39ac07939481db477632f27e99709c831f-merged.mount: Deactivated successfully.
Nov 25 20:50:17 compute-0 podman[275462]: 2025-11-25 20:50:17.478285054 +0000 UTC m=+1.314109688 container remove c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 20:50:17 compute-0 systemd[1]: libpod-conmon-c642ddbf7e1b5e9bc1cf06606a12bafb19960627f3f284049c7049d712797863.scope: Deactivated successfully.
Nov 25 20:50:17 compute-0 sudo[275358]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:17 compute-0 sudo[275520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:50:17 compute-0 sudo[275520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:17 compute-0 sudo[275520]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:17 compute-0 sudo[275545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:50:17 compute-0 sudo[275545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:17 compute-0 sudo[275545]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:17 compute-0 sudo[275570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:50:17 compute-0 sudo[275570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:17 compute-0 sudo[275570]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:17 compute-0 sudo[275595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:50:17 compute-0 sudo[275595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:17 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1360: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:18 compute-0 podman[275659]: 2025-11-25 20:50:18.316682163 +0000 UTC m=+0.048543222 container create 1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:50:18 compute-0 systemd[1]: Started libpod-conmon-1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1.scope.
Nov 25 20:50:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:50:18 compute-0 podman[275659]: 2025-11-25 20:50:18.299901386 +0000 UTC m=+0.031762435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:50:18 compute-0 podman[275659]: 2025-11-25 20:50:18.40694072 +0000 UTC m=+0.138801759 container init 1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 20:50:18 compute-0 podman[275659]: 2025-11-25 20:50:18.418731881 +0000 UTC m=+0.150592900 container start 1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:50:18 compute-0 podman[275659]: 2025-11-25 20:50:18.423045978 +0000 UTC m=+0.154906997 container attach 1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:50:18 compute-0 silly_golick[275675]: 167 167
Nov 25 20:50:18 compute-0 systemd[1]: libpod-1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1.scope: Deactivated successfully.
Nov 25 20:50:18 compute-0 conmon[275675]: conmon 1c97a96a302ee7874c58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1.scope/container/memory.events
Nov 25 20:50:18 compute-0 podman[275659]: 2025-11-25 20:50:18.427235132 +0000 UTC m=+0.159096211 container died 1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 20:50:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-13e5c7fc270131baaa92646d4ddca20a346dd93dd580ed51eff7b64c8b537f6b-merged.mount: Deactivated successfully.
Nov 25 20:50:18 compute-0 podman[275659]: 2025-11-25 20:50:18.476712638 +0000 UTC m=+0.208573697 container remove 1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 20:50:18 compute-0 systemd[1]: libpod-conmon-1c97a96a302ee7874c58cd1ed29128bf5cd92762e27c525fd2bc8274f68f34d1.scope: Deactivated successfully.
Nov 25 20:50:18 compute-0 podman[275699]: 2025-11-25 20:50:18.683535658 +0000 UTC m=+0.063114799 container create e5a1c30ebaa936918a208f7b2bd342385b14297ee936954836552a95e10c85e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:50:18 compute-0 systemd[1]: Started libpod-conmon-e5a1c30ebaa936918a208f7b2bd342385b14297ee936954836552a95e10c85e0.scope.
Nov 25 20:50:18 compute-0 podman[275699]: 2025-11-25 20:50:18.651186008 +0000 UTC m=+0.030765199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:50:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65692c18f50956379ee6f11ce7f07dcc4671734bb481260df17a11d8b2aa91d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65692c18f50956379ee6f11ce7f07dcc4671734bb481260df17a11d8b2aa91d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65692c18f50956379ee6f11ce7f07dcc4671734bb481260df17a11d8b2aa91d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65692c18f50956379ee6f11ce7f07dcc4671734bb481260df17a11d8b2aa91d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:18 compute-0 podman[275699]: 2025-11-25 20:50:18.778636886 +0000 UTC m=+0.158216027 container init e5a1c30ebaa936918a208f7b2bd342385b14297ee936954836552a95e10c85e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:50:18 compute-0 podman[275699]: 2025-11-25 20:50:18.791938709 +0000 UTC m=+0.171517850 container start e5a1c30ebaa936918a208f7b2bd342385b14297ee936954836552a95e10c85e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:50:18 compute-0 podman[275699]: 2025-11-25 20:50:18.796716459 +0000 UTC m=+0.176295600 container attach e5a1c30ebaa936918a208f7b2bd342385b14297ee936954836552a95e10c85e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:50:19 compute-0 ceph-mon[75144]: pgmap v1360: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:19 compute-0 priceless_neumann[275715]: {
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:     "0": [
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:         {
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "devices": [
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "/dev/loop3"
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             ],
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_name": "ceph_lv0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_size": "21470642176",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "name": "ceph_lv0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "tags": {
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.cluster_name": "ceph",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.crush_device_class": "",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.encrypted": "0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.osd_id": "0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.type": "block",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.vdo": "0"
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             },
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "type": "block",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "vg_name": "ceph_vg0"
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:         }
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:     ],
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:     "1": [
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:         {
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "devices": [
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "/dev/loop4"
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             ],
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_name": "ceph_lv1",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_size": "21470642176",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "name": "ceph_lv1",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "tags": {
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.cluster_name": "ceph",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.crush_device_class": "",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.encrypted": "0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.osd_id": "1",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.type": "block",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.vdo": "0"
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             },
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "type": "block",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "vg_name": "ceph_vg1"
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:         }
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:     ],
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:     "2": [
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:         {
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "devices": [
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "/dev/loop5"
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             ],
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_name": "ceph_lv2",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_size": "21470642176",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "name": "ceph_lv2",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "tags": {
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.cluster_name": "ceph",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.crush_device_class": "",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.encrypted": "0",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.osd_id": "2",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.type": "block",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:                 "ceph.vdo": "0"
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             },
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "type": "block",
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:             "vg_name": "ceph_vg2"
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:         }
Nov 25 20:50:19 compute-0 priceless_neumann[275715]:     ]
Nov 25 20:50:19 compute-0 priceless_neumann[275715]: }
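The JSON block emitted by `priceless_neumann` is the complete output of the `ceph-volume ... lvm list --format json` command from the sudo line above: top-level keys are OSD ids, each holding one LV entry whose tags carry the cluster fsid, OSD fsid, and block device. A minimal sketch that reduces it to an osd -> device table, assuming the block has been captured to a file (hypothetical name lvm_list.json):

    #!/usr/bin/env python3
    # Sketch: summarize the `lvm list --format json` output shown above
    # as one line per OSD: LV path, backing device, and OSD fsid.
    import json

    with open("lvm_list.json") as f:  # hypothetical capture of the log's JSON
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")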
Nov 25 20:50:19 compute-0 systemd[1]: libpod-e5a1c30ebaa936918a208f7b2bd342385b14297ee936954836552a95e10c85e0.scope: Deactivated successfully.
Nov 25 20:50:19 compute-0 podman[275699]: 2025-11-25 20:50:19.602247513 +0000 UTC m=+0.981826614 container died e5a1c30ebaa936918a208f7b2bd342385b14297ee936954836552a95e10c85e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:50:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-65692c18f50956379ee6f11ce7f07dcc4671734bb481260df17a11d8b2aa91d4-merged.mount: Deactivated successfully.
Nov 25 20:50:19 compute-0 podman[275699]: 2025-11-25 20:50:19.673087472 +0000 UTC m=+1.052666583 container remove e5a1c30ebaa936918a208f7b2bd342385b14297ee936954836552a95e10c85e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 20:50:19 compute-0 systemd[1]: libpod-conmon-e5a1c30ebaa936918a208f7b2bd342385b14297ee936954836552a95e10c85e0.scope: Deactivated successfully.
Nov 25 20:50:19 compute-0 sudo[275595]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:19 compute-0 sudo[275738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:50:19 compute-0 sudo[275738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:19 compute-0 sudo[275738]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:19 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1361: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:19 compute-0 sudo[275763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:50:19 compute-0 sudo[275763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:19 compute-0 sudo[275763]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:20 compute-0 sudo[275788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:50:20 compute-0 sudo[275788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:20 compute-0 sudo[275788]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:20 compute-0 sudo[275813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:50:20 compute-0 sudo[275813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:20 compute-0 podman[275837]: 2025-11-25 20:50:20.288604734 +0000 UTC m=+0.148505043 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 25 20:50:20 compute-0 podman[275907]: 2025-11-25 20:50:20.57428239 +0000 UTC m=+0.066801839 container create 0ddac22822790670c291cf2b9b0bbadb72a716425db4ccac0c14dc352b085925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 20:50:20 compute-0 systemd[1]: Started libpod-conmon-0ddac22822790670c291cf2b9b0bbadb72a716425db4ccac0c14dc352b085925.scope.
Nov 25 20:50:20 compute-0 podman[275907]: 2025-11-25 20:50:20.545528218 +0000 UTC m=+0.038047717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:50:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:50:20 compute-0 podman[275907]: 2025-11-25 20:50:20.67162845 +0000 UTC m=+0.164147919 container init 0ddac22822790670c291cf2b9b0bbadb72a716425db4ccac0c14dc352b085925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:50:20 compute-0 podman[275907]: 2025-11-25 20:50:20.683012599 +0000 UTC m=+0.175532048 container start 0ddac22822790670c291cf2b9b0bbadb72a716425db4ccac0c14dc352b085925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 20:50:20 compute-0 podman[275907]: 2025-11-25 20:50:20.686792022 +0000 UTC m=+0.179311511 container attach 0ddac22822790670c291cf2b9b0bbadb72a716425db4ccac0c14dc352b085925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:50:20 compute-0 distracted_haibt[275924]: 167 167
Nov 25 20:50:20 compute-0 systemd[1]: libpod-0ddac22822790670c291cf2b9b0bbadb72a716425db4ccac0c14dc352b085925.scope: Deactivated successfully.
Nov 25 20:50:20 compute-0 podman[275907]: 2025-11-25 20:50:20.69072252 +0000 UTC m=+0.183241959 container died 0ddac22822790670c291cf2b9b0bbadb72a716425db4ccac0c14dc352b085925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bebb35f52c80e57a577300f4601cf0fe059d72f65e5f711151d38132631baa81-merged.mount: Deactivated successfully.
Nov 25 20:50:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:20 compute-0 podman[275907]: 2025-11-25 20:50:20.73409086 +0000 UTC m=+0.226610269 container remove 0ddac22822790670c291cf2b9b0bbadb72a716425db4ccac0c14dc352b085925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:50:20 compute-0 systemd[1]: libpod-conmon-0ddac22822790670c291cf2b9b0bbadb72a716425db4ccac0c14dc352b085925.scope: Deactivated successfully.
Nov 25 20:50:20 compute-0 podman[275947]: 2025-11-25 20:50:20.900300314 +0000 UTC m=+0.047155195 container create 4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:50:20 compute-0 systemd[1]: Started libpod-conmon-4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd.scope.
Nov 25 20:50:20 compute-0 podman[275947]: 2025-11-25 20:50:20.875084908 +0000 UTC m=+0.021939859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:50:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc99e9819f30fdc8f3df8e8a94c9cddafe6e2b682bc7103fb09dc208aef59ab1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc99e9819f30fdc8f3df8e8a94c9cddafe6e2b682bc7103fb09dc208aef59ab1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc99e9819f30fdc8f3df8e8a94c9cddafe6e2b682bc7103fb09dc208aef59ab1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc99e9819f30fdc8f3df8e8a94c9cddafe6e2b682bc7103fb09dc208aef59ab1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:50:21 compute-0 podman[275947]: 2025-11-25 20:50:21.00048422 +0000 UTC m=+0.147339101 container init 4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:50:21 compute-0 podman[275947]: 2025-11-25 20:50:21.01480461 +0000 UTC m=+0.161659531 container start 4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:50:21 compute-0 podman[275947]: 2025-11-25 20:50:21.019856287 +0000 UTC m=+0.166711178 container attach 4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 20:50:21 compute-0 ceph-mon[75144]: pgmap v1361: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:21 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1362: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]: {
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "osd_id": 2,
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "type": "bluestore"
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:     },
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "osd_id": 1,
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "type": "bluestore"
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:     },
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "osd_id": 0,
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:         "type": "bluestore"
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]:     }
Nov 25 20:50:22 compute-0 mystifying_bhabha[275964]: }
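The `mystifying_bhabha` block is the matching `raw list --format json` output requested two sudo lines earlier: the same three bluestore OSDs, now keyed by osd_uuid instead of osd_id. Each osd_uuid here equals the ceph.osd_fsid tag in the lvm list above, so the two views can be cross-checked. A minimal sketch of that consistency check, under the same saved-to-file assumption (hypothetical raw_list.json and lvm_list.json):

    #!/usr/bin/env python3
    # Sketch: cross-check `raw list` against `lvm list` from this log:
    # every raw bluestore OSD should belong to this cluster fsid and
    # appear as an osd_fsid tag on some LV.
    import json

    FSID = "712dd110-763a-5547-8ef7-acda1414fdce"  # from this log

    with open("raw_list.json") as f:   # keyed by osd_uuid
        raw = json.load(f)
    with open("lvm_list.json") as f:   # keyed by osd_id
        lvm = json.load(f)

    lvm_uuids = {lv["tags"]["ceph.osd_fsid"]
                 for lvs in lvm.values() for lv in lvs}

    for osd_uuid, entry in raw.items():
        assert entry["ceph_fsid"] == FSID, f"foreign OSD {osd_uuid}"
        marker = "ok" if osd_uuid in lvm_uuids else "MISSING from lvm list"
        print(f"osd.{entry['osd_id']} {entry['device']} [{marker}]")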
Nov 25 20:50:22 compute-0 systemd[1]: libpod-4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd.scope: Deactivated successfully.
Nov 25 20:50:22 compute-0 systemd[1]: libpod-4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd.scope: Consumed 1.112s CPU time.
Nov 25 20:50:22 compute-0 podman[275997]: 2025-11-25 20:50:22.173929619 +0000 UTC m=+0.037655176 container died 4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc99e9819f30fdc8f3df8e8a94c9cddafe6e2b682bc7103fb09dc208aef59ab1-merged.mount: Deactivated successfully.
Nov 25 20:50:22 compute-0 podman[275997]: 2025-11-25 20:50:22.257941975 +0000 UTC m=+0.121667522 container remove 4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:50:22 compute-0 systemd[1]: libpod-conmon-4142da2fcdb3e0fa9703451321bf7dbf7a9570af048d4cf3b3f2ea104bb017fd.scope: Deactivated successfully.
Nov 25 20:50:22 compute-0 sudo[275813]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:50:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:50:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:50:22 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:50:22 compute-0 sudo[276012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:50:22 compute-0 sudo[276012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:22 compute-0 sudo[276012]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:22 compute-0 sudo[276037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:50:22 compute-0 sudo[276037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:50:22 compute-0 sudo[276037]: pam_unix(sudo:session): session closed for user root
Nov 25 20:50:23 compute-0 ceph-mon[75144]: pgmap v1362: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:50:23 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:50:23 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1363: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:25 compute-0 ceph-mon[75144]: pgmap v1363: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:25 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1364: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:50:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:50:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:50:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:50:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:50:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:50:27 compute-0 ceph-mon[75144]: pgmap v1364: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:27 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1365: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:29 compute-0 ceph-mon[75144]: pgmap v1365: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:29 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1366: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:31 compute-0 ceph-mon[75144]: pgmap v1366: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:31 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1367: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:33 compute-0 ceph-mon[75144]: pgmap v1367: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:33 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1368: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:35 compute-0 ceph-mon[75144]: pgmap v1368: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:35 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1369: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:37 compute-0 ceph-mon[75144]: pgmap v1369: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:37 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1370: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:39 compute-0 nova_compute[248866]: 2025-11-25 20:50:39.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:39 compute-0 nova_compute[248866]: 2025-11-25 20:50:39.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 20:50:39 compute-0 ceph-mon[75144]: pgmap v1370: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:39 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1371: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:41 compute-0 ceph-mon[75144]: pgmap v1371: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:41 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1372: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:41 compute-0 podman[276062]: 2025-11-25 20:50:41.982764309 +0000 UTC m=+0.077752606 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 20:50:43 compute-0 ceph-mon[75144]: pgmap v1372: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:43 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1373: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:43 compute-0 podman[276083]: 2025-11-25 20:50:43.991509444 +0000 UTC m=+0.089945539 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=multipathd)
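[annotation] The health_status=healthy events for ovn_metadata_agent and multipathd are podman executing each container's configured healthcheck on its timer (the 'test': '/openstack/healthcheck' entry in the config_data above, bind-mounted from /var/lib/openstack/healthchecks). A sketch of triggering the same probe by hand, assuming podman's healthcheck subcommand is available; the container names are the ones in the events:

import subprocess

# "podman healthcheck run NAME" executes the container's configured test and
# exits 0 when healthy; names taken from the health_status events above.
for name in ("ovn_metadata_agent", "multipathd"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(f"{name}: {'healthy' if rc == 0 else f'unhealthy (rc={rc})'}")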
Nov 25 20:50:45 compute-0 ceph-mon[75144]: pgmap v1373: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:45 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1374: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:47 compute-0 ceph-mon[75144]: pgmap v1374: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:47 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1375: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:50:48.969 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:50:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:50:48.970 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:50:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:50:48.970 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:50:49 compute-0 nova_compute[248866]: 2025-11-25 20:50:49.066 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:49 compute-0 ceph-mon[75144]: pgmap v1375: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:49 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1376: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:50 compute-0 nova_compute[248866]: 2025-11-25 20:50:50.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:51 compute-0 podman[276104]: 2025-11-25 20:50:51.025100794 +0000 UTC m=+0.115891365 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:50:51 compute-0 nova_compute[248866]: 2025-11-25 20:50:51.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:51 compute-0 ceph-mon[75144]: pgmap v1376: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:51 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1377: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:53 compute-0 ceph-mon[75144]: pgmap v1377: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:53 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1378: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:54 compute-0 nova_compute[248866]: 2025-11-25 20:50:54.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:54 compute-0 nova_compute[248866]: 2025-11-25 20:50:54.041 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:50:55 compute-0 ceph-mon[75144]: pgmap v1378: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:50:55 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1379: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:56 compute-0 nova_compute[248866]: 2025-11-25 20:50:56.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:50:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:50:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:50:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:50:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:50:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:50:57 compute-0 nova_compute[248866]: 2025-11-25 20:50:57.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:50:57
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups', 'vms']
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:50:57 compute-0 ceph-mon[75144]: pgmap v1379: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: client.0 ms_handle_reset on v2:192.168.122.100:6800/446496168
Nov 25 20:50:57 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1380: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.079 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.080 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.080 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.080 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.081 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:50:58 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:50:58 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239308705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.602 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
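[annotation] The audit lines above show this probe from the mon's side: nova's resource tracker shells out (via oslo_concurrency.processutils, per the DEBUG lines) to read RBD capacity as client.openstack. A sketch reproducing the call; the JSON field names follow the usual ceph df layout and should be treated as assumptions for other releases:

import json
import subprocess

# Reproduce the capacity probe logged above: "ceph df --format=json" as
# client.openstack, then pull cluster and per-pool numbers out of the JSON.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"], text=True,
)
df = json.loads(out)

total = df["stats"]["total_bytes"]
avail = df["stats"]["total_avail_bytes"]
print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

for pool in df["pools"]:
    s = pool["stats"]
    print(f"pool {pool['name']}: {s['bytes_used']} bytes used, "
          f"{s['max_avail']} bytes max avail")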
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.756 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.757 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5303MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.757 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.757 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.834 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.835 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:50:58 compute-0 nova_compute[248866]: 2025-11-25 20:50:58.862 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:50:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:50:59 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550253824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:50:59 compute-0 nova_compute[248866]: 2025-11-25 20:50:59.327 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:50:59 compute-0 nova_compute[248866]: 2025-11-25 20:50:59.336 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:50:59 compute-0 nova_compute[248866]: 2025-11-25 20:50:59.357 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
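[annotation] The inventory dict above is worth decoding: Placement treats schedulable capacity as (total - reserved) * allocation_ratio per resource class, so this host advertises far more than its 8 physical vCPUs. Worked out with the logged numbers:

# Capacity as Placement computes it from the inventory in the log line above:
# (total - reserved) * allocation_ratio per resource class.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1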
Nov 25 20:50:59 compute-0 nova_compute[248866]: 2025-11-25 20:50:59.359 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:50:59 compute-0 nova_compute[248866]: 2025-11-25 20:50:59.359 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:50:59 compute-0 nova_compute[248866]: 2025-11-25 20:50:59.360 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:50:59 compute-0 nova_compute[248866]: 2025-11-25 20:50:59.360 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 20:50:59 compute-0 nova_compute[248866]: 2025-11-25 20:50:59.376 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 20:50:59 compute-0 ceph-mon[75144]: pgmap v1380: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:50:59 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/239308705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:50:59 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1550253824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:50:59 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1381: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:00 compute-0 nova_compute[248866]: 2025-11-25 20:51:00.378 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:00 compute-0 nova_compute[248866]: 2025-11-25 20:51:00.378 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:51:00 compute-0 nova_compute[248866]: 2025-11-25 20:51:00.379 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:51:00 compute-0 nova_compute[248866]: 2025-11-25 20:51:00.397 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:51:00 compute-0 nova_compute[248866]: 2025-11-25 20:51:00.398 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:01 compute-0 ceph-mon[75144]: pgmap v1381: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:01 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1382: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:51:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
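[annotation] The .mgr line is the only pool holding data, and its numbers check out: the autoscaler multiplies the pool's share of raw space by a PG budget and quantizes to a power of two with a floor of 1. The 300x multiplier below is an assumption (the default mon_target_pg_per_osd=100 times the 3 OSDs on this host), but it reproduces the logged value exactly; the real module also applies bias, pg_num_min, and change thresholds:

import math

usage = 1.4371499967441557e-05   # ".mgr ... using ... of space" (from the log)
pg_target = usage * 100 * 3      # mon_target_pg_per_osd=100 x 3 OSDs (assumed)
print(pg_target)                 # 0.004311449990232467, matching "pg target"

# Quantize to a power of two, never below 1 -> the logged "quantized to 1".
pg_num = 2 ** math.ceil(math.log2(max(pg_target, 1.0)))
print(pg_num)                    # 1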
Nov 25 20:51:03 compute-0 ceph-mon[75144]: pgmap v1382: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:03 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1383: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:05 compute-0 ceph-mon[75144]: pgmap v1383: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:05 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1384: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:07 compute-0 ceph-mon[75144]: pgmap v1384: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:07 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1385: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:09 compute-0 ceph-mon[75144]: pgmap v1385: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:09 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1386: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:10 compute-0 nova_compute[248866]: 2025-11-25 20:51:10.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:11 compute-0 ceph-mon[75144]: pgmap v1386: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:11 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1387: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:12 compute-0 podman[276174]: 2025-11-25 20:51:12.985839272 +0000 UTC m=+0.077551991 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 20:51:13 compute-0 ceph-mon[75144]: pgmap v1387: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1388: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:15 compute-0 podman[276192]: 2025-11-25 20:51:15.007319612 +0000 UTC m=+0.092538579 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 20:51:15 compute-0 ceph-mon[75144]: pgmap v1388: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1389: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:51:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3305348784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:51:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:51:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3305348784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:51:17 compute-0 ceph-mon[75144]: pgmap v1389: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3305348784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:51:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3305348784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:51:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1390: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:19 compute-0 ceph-mon[75144]: pgmap v1390: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1391: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:21 compute-0 ceph-mon[75144]: pgmap v1391: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1392: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:22 compute-0 podman[276212]: 2025-11-25 20:51:22.023808698 +0000 UTC m=+0.117346555 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 20:51:22 compute-0 sudo[276240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:22 compute-0 sudo[276240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:22 compute-0 sudo[276240]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:22 compute-0 sudo[276265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:51:22 compute-0 sudo[276265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:22 compute-0 sudo[276265]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:22 compute-0 sudo[276290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:22 compute-0 sudo[276290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:22 compute-0 sudo[276290]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:22 compute-0 sudo[276315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 20:51:22 compute-0 sudo[276315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:23 compute-0 podman[276414]: 2025-11-25 20:51:23.61363431 +0000 UTC m=+0.086300290 container exec 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:51:23 compute-0 ceph-mon[75144]: pgmap v1392: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:23 compute-0 podman[276414]: 2025-11-25 20:51:23.745434527 +0000 UTC m=+0.218100467 container exec_died 3091c900b6c1c9ad347035dddce67ded4e78762048ba069fd968ecace3c6576a (image=quay.io/ceph/ceph:v18, name=ceph-712dd110-763a-5547-8ef7-acda1414fdce-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:51:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1393: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:24 compute-0 sudo[276315]: pam_unix(sudo:session): session closed for user root
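[annotation] The sudo bursts here follow cephadm's fixed per-command preamble over SSH, visible verbatim in the entries above: /bin/true to test sudo, /bin/which python3 to locate an interpreter, /bin/true again, then python3 against the content-addressed cephadm binary under /var/lib/ceph/<fsid>/. The logged subcommands (ls here, gather-facts and ceph-volume below) can be replayed on the host; a sketch, reusing the exact path from the log:

import subprocess

# Replay of the orchestrator call logged above: "cephadm ls" prints a JSON
# inventory of every daemon cephadm manages on this host. Path copied verbatim.
CEPHADM = ("/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
subprocess.run(["sudo", "python3", CEPHADM, "ls"], check=True)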
Nov 25 20:51:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:51:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:51:24 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:24 compute-0 sudo[276535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:24 compute-0 sudo[276535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:24 compute-0 sudo[276535]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:24 compute-0 sudo[276560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:51:24 compute-0 sudo[276560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:24 compute-0 sudo[276560]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:24 compute-0 sudo[276585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:24 compute-0 sudo[276585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:24 compute-0 sudo[276585]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:24 compute-0 sudo[276610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:51:24 compute-0 sudo[276610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:25 compute-0 sudo[276610]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:51:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:51:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:51:25 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:51:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:51:25 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:25 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 70d64f3c-c09f-4bf7-8c11-b4a39172c5c3 does not exist
Nov 25 20:51:25 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 0c8dd40b-51aa-4bb5-be3d-dbd22d07573d does not exist
Nov 25 20:51:25 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 0e56a2c7-8ee7-4470-bbb1-e3f1e74d672b does not exist
Nov 25 20:51:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:51:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:51:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:51:25 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:51:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:51:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:51:25 compute-0 ceph-mon[75144]: pgmap v1393: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:51:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:51:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:51:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:51:25 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
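[editor's note] Each handle_command line above is the monitor receiving a JSON mon_command from the mgr (mgr.14132), with a matching audit entry logged on dispatch. The same call can be issued from the python-rados bindings; a sketch assuming the stock /etc/ceph/ceph.conf and an admin keyring on the host.

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # default paths, an assumption
cluster.connect()
try:
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})  # verbatim from the log
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(outbuf.decode())  # the minimal ceph.conf the mgr fetched above
finally:
    cluster.shutdown()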
Nov 25 20:51:25 compute-0 sudo[276667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:25 compute-0 sudo[276667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:25 compute-0 sudo[276667]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:25 compute-0 sudo[276692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:51:25 compute-0 sudo[276692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:25 compute-0 sudo[276692]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:25 compute-0 sudo[276717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:25 compute-0 sudo[276717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:25 compute-0 sudo[276717]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:25 compute-0 sudo[276742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:51:25 compute-0 sudo[276742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1394: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:26 compute-0 podman[276807]: 2025-11-25 20:51:26.135184371 +0000 UTC m=+0.054084053 container create 83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:51:26 compute-0 systemd[1]: Started libpod-conmon-83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e.scope.
Nov 25 20:51:26 compute-0 podman[276807]: 2025-11-25 20:51:26.110640542 +0000 UTC m=+0.029540244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:51:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:51:26 compute-0 podman[276807]: 2025-11-25 20:51:26.325162202 +0000 UTC m=+0.244061944 container init 83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 20:51:26 compute-0 podman[276807]: 2025-11-25 20:51:26.336040368 +0000 UTC m=+0.254940090 container start 83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:51:26 compute-0 happy_clarke[276824]: 167 167
Nov 25 20:51:26 compute-0 systemd[1]: libpod-83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e.scope: Deactivated successfully.
Nov 25 20:51:26 compute-0 conmon[276824]: conmon 83ddc60d5a00f66ee3b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e.scope/container/memory.events
Nov 25 20:51:26 compute-0 podman[276807]: 2025-11-25 20:51:26.396126423 +0000 UTC m=+0.315026105 container attach 83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:51:26 compute-0 podman[276807]: 2025-11-25 20:51:26.397663955 +0000 UTC m=+0.316563667 container died 83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 20:51:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e00560d413b8ee527c872e59cc086edd6e5c976435347b862fd73971bc7343f3-merged.mount: Deactivated successfully.
Nov 25 20:51:26 compute-0 podman[276807]: 2025-11-25 20:51:26.654360332 +0000 UTC m=+0.573260024 container remove 83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:51:26 compute-0 systemd[1]: libpod-conmon-83ddc60d5a00f66ee3b77a7b14cddf63eb221f8c4580bcb23294ded8d15c301e.scope: Deactivated successfully.
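[editor's note] Short-lived containers like happy_clarke that only print "167 167" look like cephadm's uid/gid probe: upstream ceph images ship the ceph user as uid/gid 167, and cephadm needs those ids to chown files on the host. A hedged equivalent probe; the stat invocation is an assumption, not lifted from cephadm's source.

import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE, "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
).stdout
uid, gid = map(int, out.split())  # 167 167 for this image, matching the log line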
Nov 25 20:51:26 compute-0 podman[276849]: 2025-11-25 20:51:26.84828595 +0000 UTC m=+0.049892709 container create 29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 20:51:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:51:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:51:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:51:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:51:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:51:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:51:26 compute-0 systemd[1]: Started libpod-conmon-29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a.scope.
Nov 25 20:51:26 compute-0 podman[276849]: 2025-11-25 20:51:26.827469603 +0000 UTC m=+0.029076392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:51:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b9177940259b96bd814e1d854df7ea8659022654bf986e8ff34781d19af246/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b9177940259b96bd814e1d854df7ea8659022654bf986e8ff34781d19af246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b9177940259b96bd814e1d854df7ea8659022654bf986e8ff34781d19af246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b9177940259b96bd814e1d854df7ea8659022654bf986e8ff34781d19af246/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b9177940259b96bd814e1d854df7ea8659022654bf986e8ff34781d19af246/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:26 compute-0 podman[276849]: 2025-11-25 20:51:26.94561775 +0000 UTC m=+0.147224579 container init 29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:51:26 compute-0 podman[276849]: 2025-11-25 20:51:26.957243396 +0000 UTC m=+0.158850155 container start 29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:51:26 compute-0 podman[276849]: 2025-11-25 20:51:26.961218874 +0000 UTC m=+0.162825713 container attach 29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:51:27 compute-0 ceph-mon[75144]: pgmap v1394: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1395: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:28 compute-0 adoring_neumann[276866]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:51:28 compute-0 adoring_neumann[276866]: --> relative data size: 1.0
Nov 25 20:51:28 compute-0 adoring_neumann[276866]: --> All data devices are unavailable
Nov 25 20:51:28 compute-0 systemd[1]: libpod-29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a.scope: Deactivated successfully.
Nov 25 20:51:28 compute-0 podman[276849]: 2025-11-25 20:51:28.155019486 +0000 UTC m=+1.356626235 container died 29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 20:51:28 compute-0 systemd[1]: libpod-29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a.scope: Consumed 1.151s CPU time.
Nov 25 20:51:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b9177940259b96bd814e1d854df7ea8659022654bf986e8ff34781d19af246-merged.mount: Deactivated successfully.
Nov 25 20:51:28 compute-0 podman[276849]: 2025-11-25 20:51:28.227365575 +0000 UTC m=+1.428972364 container remove 29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:51:28 compute-0 systemd[1]: libpod-conmon-29468811b155c6a82c0b29d3087ab386ebb94088b90264b10116f9136d96eb6a.scope: Deactivated successfully.
Nov 25 20:51:28 compute-0 sudo[276742]: pam_unix(sudo:session): session closed for user root
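[editor's note] The lvm batch run in adoring_neumann reported "All data devices are unavailable" (above) because all three LVs already carry OSD lv_tags; the lvm list run below confirms osd_id 0-2 on ceph_vg0-2, so there is nothing new to create and the reconciliation is a no-op. A sketch of checking that by hand through the real lvs --reportformat json interface; the "has a ceph.osd_id tag == taken" rule is an illustration, not ceph-volume's actual logic.

import json, subprocess

out = subprocess.run(  # typically needs root
    ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"],
    check=True, capture_output=True, text=True,
).stdout
for lv in json.loads(out)["report"][0]["lv"]:
    taken = "ceph.osd_id=" in lv["lv_tags"]
    print(f'{lv["vg_name"]}/{lv["lv_name"]}: {"already an OSD" if taken else "available"}')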
Nov 25 20:51:28 compute-0 sudo[276908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:28 compute-0 sudo[276908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:28 compute-0 sudo[276908]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:28 compute-0 sudo[276933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:51:28 compute-0 sudo[276933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:28 compute-0 sudo[276933]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:28 compute-0 sudo[276958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:28 compute-0 sudo[276958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:28 compute-0 sudo[276958]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:28 compute-0 sudo[276983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:51:28 compute-0 sudo[276983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:29 compute-0 podman[277046]: 2025-11-25 20:51:29.169341314 +0000 UTC m=+0.062580964 container create bd4ab6a7fb772ca7eb37672f8d1a1ca5384bc67c0f7e6b2d66113be0699ce9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:51:29 compute-0 systemd[1]: Started libpod-conmon-bd4ab6a7fb772ca7eb37672f8d1a1ca5384bc67c0f7e6b2d66113be0699ce9fc.scope.
Nov 25 20:51:29 compute-0 podman[277046]: 2025-11-25 20:51:29.149265757 +0000 UTC m=+0.042505447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:51:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:51:29 compute-0 podman[277046]: 2025-11-25 20:51:29.269198192 +0000 UTC m=+0.162437922 container init bd4ab6a7fb772ca7eb37672f8d1a1ca5384bc67c0f7e6b2d66113be0699ce9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:51:29 compute-0 podman[277046]: 2025-11-25 20:51:29.282535495 +0000 UTC m=+0.175775175 container start bd4ab6a7fb772ca7eb37672f8d1a1ca5384bc67c0f7e6b2d66113be0699ce9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:51:29 compute-0 podman[277046]: 2025-11-25 20:51:29.286669357 +0000 UTC m=+0.179909037 container attach bd4ab6a7fb772ca7eb37672f8d1a1ca5384bc67c0f7e6b2d66113be0699ce9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:51:29 compute-0 elegant_tu[277063]: 167 167
Nov 25 20:51:29 compute-0 systemd[1]: libpod-bd4ab6a7fb772ca7eb37672f8d1a1ca5384bc67c0f7e6b2d66113be0699ce9fc.scope: Deactivated successfully.
Nov 25 20:51:29 compute-0 podman[277046]: 2025-11-25 20:51:29.290457651 +0000 UTC m=+0.183697371 container died bd4ab6a7fb772ca7eb37672f8d1a1ca5384bc67c0f7e6b2d66113be0699ce9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 20:51:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-889c83bf8f17174292a9675469d1841e4a6082dbfd84158e0d24f436ad7219ce-merged.mount: Deactivated successfully.
Nov 25 20:51:29 compute-0 podman[277046]: 2025-11-25 20:51:29.345260272 +0000 UTC m=+0.238499942 container remove bd4ab6a7fb772ca7eb37672f8d1a1ca5384bc67c0f7e6b2d66113be0699ce9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 20:51:29 compute-0 systemd[1]: libpod-conmon-bd4ab6a7fb772ca7eb37672f8d1a1ca5384bc67c0f7e6b2d66113be0699ce9fc.scope: Deactivated successfully.
Nov 25 20:51:29 compute-0 ceph-mon[75144]: pgmap v1395: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:29 compute-0 podman[277085]: 2025-11-25 20:51:29.567617964 +0000 UTC m=+0.061362290 container create dd7d17db0d97ca98b103e9ed488d151c13c023199ad5cbe65c27e138c97054d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hofstadter, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:51:29 compute-0 systemd[1]: Started libpod-conmon-dd7d17db0d97ca98b103e9ed488d151c13c023199ad5cbe65c27e138c97054d7.scope.
Nov 25 20:51:29 compute-0 podman[277085]: 2025-11-25 20:51:29.537896265 +0000 UTC m=+0.031640651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:51:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0963f5a0e566552a4f291d8804c4b2c00787ff24f5df095884ba1921e52e5467/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0963f5a0e566552a4f291d8804c4b2c00787ff24f5df095884ba1921e52e5467/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0963f5a0e566552a4f291d8804c4b2c00787ff24f5df095884ba1921e52e5467/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0963f5a0e566552a4f291d8804c4b2c00787ff24f5df095884ba1921e52e5467/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:29 compute-0 podman[277085]: 2025-11-25 20:51:29.656607986 +0000 UTC m=+0.150352302 container init dd7d17db0d97ca98b103e9ed488d151c13c023199ad5cbe65c27e138c97054d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hofstadter, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:51:29 compute-0 podman[277085]: 2025-11-25 20:51:29.670661039 +0000 UTC m=+0.164405365 container start dd7d17db0d97ca98b103e9ed488d151c13c023199ad5cbe65c27e138c97054d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:51:29 compute-0 podman[277085]: 2025-11-25 20:51:29.675057509 +0000 UTC m=+0.168801835 container attach dd7d17db0d97ca98b103e9ed488d151c13c023199ad5cbe65c27e138c97054d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hofstadter, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:51:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1396: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]: {
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:     "0": [
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:         {
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "devices": [
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "/dev/loop3"
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             ],
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_name": "ceph_lv0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_size": "21470642176",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "name": "ceph_lv0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "tags": {
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.cluster_name": "ceph",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.crush_device_class": "",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.encrypted": "0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.osd_id": "0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.type": "block",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.vdo": "0"
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             },
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "type": "block",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "vg_name": "ceph_vg0"
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:         }
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:     ],
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:     "1": [
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:         {
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "devices": [
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "/dev/loop4"
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             ],
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_name": "ceph_lv1",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_size": "21470642176",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "name": "ceph_lv1",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "tags": {
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.cluster_name": "ceph",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.crush_device_class": "",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.encrypted": "0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.osd_id": "1",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.type": "block",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.vdo": "0"
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             },
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "type": "block",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "vg_name": "ceph_vg1"
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:         }
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:     ],
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:     "2": [
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:         {
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "devices": [
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "/dev/loop5"
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             ],
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_name": "ceph_lv2",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_size": "21470642176",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "name": "ceph_lv2",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "tags": {
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.cluster_name": "ceph",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.crush_device_class": "",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.encrypted": "0",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.osd_id": "2",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.type": "block",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:                 "ceph.vdo": "0"
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             },
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "type": "block",
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:             "vg_name": "ceph_vg2"
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:         }
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]:     ]
Nov 25 20:51:30 compute-0 magical_hofstadter[277102]: }
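[editor's note] The JSON above is the full ceph-volume lvm list payload: top-level keys are OSD ids, each mapping to a list of LV entries whose "tags" carry the cluster fsid, osd_fsid and spec affinity. A short parser for exactly this structure; reading it from a file named lvm_list.json is an assumption.

import json

with open("lvm_list.json") as f:  # hypothetical capture of the payload above
    listing = json.load(f)

osds = {
    int(osd_id): (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    for osd_id, lvs in listing.items()
    for lv in lvs
    if lv["type"] == "block"
}
print(osds)  # {0: ('/dev/ceph_vg0/ceph_lv0', 'f0a2211a-...'), 1: ..., 2: ...}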
Nov 25 20:51:30 compute-0 systemd[1]: libpod-dd7d17db0d97ca98b103e9ed488d151c13c023199ad5cbe65c27e138c97054d7.scope: Deactivated successfully.
Nov 25 20:51:30 compute-0 podman[277085]: 2025-11-25 20:51:30.391587391 +0000 UTC m=+0.885331727 container died dd7d17db0d97ca98b103e9ed488d151c13c023199ad5cbe65c27e138c97054d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hofstadter, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 20:51:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0963f5a0e566552a4f291d8804c4b2c00787ff24f5df095884ba1921e52e5467-merged.mount: Deactivated successfully.
Nov 25 20:51:30 compute-0 podman[277085]: 2025-11-25 20:51:30.469487191 +0000 UTC m=+0.963231497 container remove dd7d17db0d97ca98b103e9ed488d151c13c023199ad5cbe65c27e138c97054d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 20:51:30 compute-0 systemd[1]: libpod-conmon-dd7d17db0d97ca98b103e9ed488d151c13c023199ad5cbe65c27e138c97054d7.scope: Deactivated successfully.
Nov 25 20:51:30 compute-0 sudo[276983]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:30 compute-0 sudo[277125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:30 compute-0 sudo[277125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:30 compute-0 sudo[277125]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:30 compute-0 sudo[277150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:51:30 compute-0 sudo[277150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:30 compute-0 sudo[277150]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:30 compute-0 sudo[277175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:30 compute-0 sudo[277175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:30 compute-0 sudo[277175]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:30 compute-0 sudo[277200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:51:30 compute-0 sudo[277200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
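[editor's note] After lvm list, the next cephadm call runs ceph-volume raw list, which scans block devices for BlueStore labels; its JSON output (beginning below) is keyed by osd_uuid rather than osd_id. A parser sketch for that shape, using only the fields visible in the log; reading from raw_list.json is an assumption.

import json

with open("raw_list.json") as f:  # hypothetical capture of the raw list payload
    raw = json.load(f)
for osd_uuid, entry in raw.items():
    # fields visible in the log: ceph_fsid, device, osd_id, osd_uuid
    print(entry["osd_id"], entry["device"], entry["ceph_fsid"])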
Nov 25 20:51:31 compute-0 podman[277266]: 2025-11-25 20:51:31.347638653 +0000 UTC m=+0.065290778 container create 5304b655336676889ad29562ac2223013efada9de2f6301f32d2f2af89bc6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bhabha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 20:51:31 compute-0 systemd[1]: Started libpod-conmon-5304b655336676889ad29562ac2223013efada9de2f6301f32d2f2af89bc6efd.scope.
Nov 25 20:51:31 compute-0 podman[277266]: 2025-11-25 20:51:31.319007324 +0000 UTC m=+0.036659499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:51:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:51:31 compute-0 podman[277266]: 2025-11-25 20:51:31.443577134 +0000 UTC m=+0.161229319 container init 5304b655336676889ad29562ac2223013efada9de2f6301f32d2f2af89bc6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bhabha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 20:51:31 compute-0 podman[277266]: 2025-11-25 20:51:31.453647018 +0000 UTC m=+0.171299133 container start 5304b655336676889ad29562ac2223013efada9de2f6301f32d2f2af89bc6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:51:31 compute-0 podman[277266]: 2025-11-25 20:51:31.457836292 +0000 UTC m=+0.175488477 container attach 5304b655336676889ad29562ac2223013efada9de2f6301f32d2f2af89bc6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 20:51:31 compute-0 gracious_bhabha[277282]: 167 167
Nov 25 20:51:31 compute-0 systemd[1]: libpod-5304b655336676889ad29562ac2223013efada9de2f6301f32d2f2af89bc6efd.scope: Deactivated successfully.
Nov 25 20:51:31 compute-0 podman[277266]: 2025-11-25 20:51:31.460790742 +0000 UTC m=+0.178442867 container died 5304b655336676889ad29562ac2223013efada9de2f6301f32d2f2af89bc6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d4a6a85fd83ab6993d325023b0e72725a30b7a20935a29e24f4ac5b965b7100-merged.mount: Deactivated successfully.
Nov 25 20:51:31 compute-0 podman[277266]: 2025-11-25 20:51:31.51543055 +0000 UTC m=+0.233082665 container remove 5304b655336676889ad29562ac2223013efada9de2f6301f32d2f2af89bc6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:51:31 compute-0 systemd[1]: libpod-conmon-5304b655336676889ad29562ac2223013efada9de2f6301f32d2f2af89bc6efd.scope: Deactivated successfully.
Nov 25 20:51:31 compute-0 ceph-mon[75144]: pgmap v1396: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:31 compute-0 podman[277306]: 2025-11-25 20:51:31.705275227 +0000 UTC m=+0.031134019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:51:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1397: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:32 compute-0 podman[277306]: 2025-11-25 20:51:32.039168065 +0000 UTC m=+0.365026807 container create 6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:51:32 compute-0 systemd[1]: Started libpod-conmon-6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012.scope.
Nov 25 20:51:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b27c9d5b50e8df84503add53a87bceb93ea0dfe76c534ba76cfac91542c08a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b27c9d5b50e8df84503add53a87bceb93ea0dfe76c534ba76cfac91542c08a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b27c9d5b50e8df84503add53a87bceb93ea0dfe76c534ba76cfac91542c08a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b27c9d5b50e8df84503add53a87bceb93ea0dfe76c534ba76cfac91542c08a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:51:32 compute-0 podman[277306]: 2025-11-25 20:51:32.153149086 +0000 UTC m=+0.479007888 container init 6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:51:32 compute-0 podman[277306]: 2025-11-25 20:51:32.168469844 +0000 UTC m=+0.494328576 container start 6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:51:32 compute-0 podman[277306]: 2025-11-25 20:51:32.173381718 +0000 UTC m=+0.499240460 container attach 6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 20:51:33 compute-0 blissful_haslett[277323]: {
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "osd_id": 2,
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "type": "bluestore"
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:     },
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "osd_id": 1,
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "type": "bluestore"
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:     },
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "osd_id": 0,
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:         "type": "bluestore"
Nov 25 20:51:33 compute-0 blissful_haslett[277323]:     }
Nov 25 20:51:33 compute-0 blissful_haslett[277323]: }
Nov 25 20:51:33 compute-0 systemd[1]: libpod-6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012.scope: Deactivated successfully.
Nov 25 20:51:33 compute-0 systemd[1]: libpod-6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012.scope: Consumed 1.160s CPU time.
Nov 25 20:51:33 compute-0 podman[277356]: 2025-11-25 20:51:33.381525121 +0000 UTC m=+0.040599107 container died 6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:51:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-11b27c9d5b50e8df84503add53a87bceb93ea0dfe76c534ba76cfac91542c08a-merged.mount: Deactivated successfully.
Nov 25 20:51:33 compute-0 podman[277356]: 2025-11-25 20:51:33.537024572 +0000 UTC m=+0.196098528 container remove 6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:51:33 compute-0 systemd[1]: libpod-conmon-6893709b34800d62a4dc0b6e2578c6bd6d7639eb26649ef18f75c88d97362012.scope: Deactivated successfully.
Nov 25 20:51:33 compute-0 ceph-mon[75144]: pgmap v1397: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:33 compute-0 sudo[277200]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:51:33 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:51:33 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:33 compute-0 sudo[277370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:51:33 compute-0 sudo[277370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:33 compute-0 sudo[277370]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:33 compute-0 sudo[277395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:51:33 compute-0 sudo[277395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:51:33 compute-0 sudo[277395]: pam_unix(sudo:session): session closed for user root
Nov 25 20:51:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1398: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:51:35 compute-0 ceph-mon[75144]: pgmap v1398: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1399: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:37 compute-0 ceph-mon[75144]: pgmap v1399: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1400: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:39 compute-0 ceph-mon[75144]: pgmap v1400: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1401: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:41 compute-0 ceph-mon[75144]: pgmap v1401: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1402: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:43 compute-0 ceph-mon[75144]: pgmap v1402: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1403: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:44 compute-0 podman[277420]: 2025-11-25 20:51:44.010780776 +0000 UTC m=+0.098684487 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:51:45 compute-0 ceph-mon[75144]: pgmap v1403: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1404: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:46 compute-0 podman[277440]: 2025-11-25 20:51:46.011538692 +0000 UTC m=+0.101570655 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:51:47 compute-0 ceph-mon[75144]: pgmap v1404: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1405: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:51:48.970 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:51:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:51:48.971 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:51:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:51:48.971 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:51:49 compute-0 ceph-mon[75144]: pgmap v1405: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1406: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:51 compute-0 nova_compute[248866]: 2025-11-25 20:51:51.085 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:51 compute-0 nova_compute[248866]: 2025-11-25 20:51:51.086 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:51 compute-0 ceph-mon[75144]: pgmap v1406: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1407: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:52 compute-0 nova_compute[248866]: 2025-11-25 20:51:52.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:53 compute-0 podman[277460]: 2025-11-25 20:51:53.050137249 +0000 UTC m=+0.132725063 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 20:51:53 compute-0 ceph-mon[75144]: pgmap v1407: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1408: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:55 compute-0 nova_compute[248866]: 2025-11-25 20:51:55.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:55 compute-0 nova_compute[248866]: 2025-11-25 20:51:55.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:51:55 compute-0 ceph-mon[75144]: pgmap v1408: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:51:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1409: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:51:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:51:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:51:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:51:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:51:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:51:57
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'images']
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:51:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:51:57 compute-0 ceph-mon[75144]: pgmap v1409: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1410: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:58 compute-0 nova_compute[248866]: 2025-11-25 20:51:58.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.102 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.103 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.103 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.103 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.104 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:51:59 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:51:59 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215993649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.589 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:51:59 compute-0 ceph-mon[75144]: pgmap v1410: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:51:59 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3215993649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.810 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.811 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5286MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.811 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:51:59 compute-0 nova_compute[248866]: 2025-11-25 20:51:59.812 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:52:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1411: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.065 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.066 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.135 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing inventories for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.211 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating ProviderTree inventory for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.212 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating inventory in ProviderTree for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.224 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing aggregate associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.242 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing trait associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, traits: HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.265 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:52:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:52:00 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047854014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:52:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.743 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.753 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.779 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.782 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:52:00 compute-0 nova_compute[248866]: 2025-11-25 20:52:00.783 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:52:01 compute-0 ceph-mon[75144]: pgmap v1411: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:01 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3047854014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:52:01 compute-0 nova_compute[248866]: 2025-11-25 20:52:01.783 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:52:01 compute-0 nova_compute[248866]: 2025-11-25 20:52:01.784 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:52:01 compute-0 nova_compute[248866]: 2025-11-25 20:52:01.784 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:52:01 compute-0 nova_compute[248866]: 2025-11-25 20:52:01.802 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:52:01 compute-0 nova_compute[248866]: 2025-11-25 20:52:01.803 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1412: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:52:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:52:03 compute-0 ceph-mon[75144]: pgmap v1412: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1413: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:05 compute-0 ceph-mon[75144]: pgmap v1413: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1414: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:07 compute-0 ceph-mon[75144]: pgmap v1414: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1415: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:09 compute-0 ceph-mon[75144]: pgmap v1415: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1416: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:11 compute-0 ceph-mon[75144]: pgmap v1416: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1417: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:13 compute-0 nova_compute[248866]: 2025-11-25 20:52:13.057 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:52:13 compute-0 ceph-mon[75144]: pgmap v1417: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1418: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:14 compute-0 podman[277530]: 2025-11-25 20:52:14.999234077 +0000 UTC m=+0.086444834 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 20:52:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:15 compute-0 ceph-mon[75144]: pgmap v1418: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1419: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:16 compute-0 podman[277549]: 2025-11-25 20:52:16.959132182 +0000 UTC m=+0.059278574 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:52:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:52:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2676314543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:52:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:52:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2676314543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:52:17 compute-0 ceph-mon[75144]: pgmap v1419: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2676314543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:52:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2676314543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:52:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1420: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:19 compute-0 ceph-mon[75144]: pgmap v1420: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1421: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:21 compute-0 ceph-mon[75144]: pgmap v1421: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1422: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:23 compute-0 ceph-mon[75144]: pgmap v1422: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1423: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:24 compute-0 podman[277569]: 2025-11-25 20:52:24.061878573 +0000 UTC m=+0.144290458 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 20:52:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:25 compute-0 ceph-mon[75144]: pgmap v1423: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.843509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103945843527, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2308, "num_deletes": 506, "total_data_size": 2332631, "memory_usage": 2388800, "flush_reason": "Manual Compaction"}
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103945858138, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2272848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26983, "largest_seqno": 29290, "table_properties": {"data_size": 2262742, "index_size": 6024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 22845, "raw_average_key_size": 19, "raw_value_size": 2240689, "raw_average_value_size": 1889, "num_data_blocks": 271, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764103712, "oldest_key_time": 1764103712, "file_creation_time": 1764103945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 14659 microseconds, and 4883 cpu microseconds.
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.858166) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2272848 bytes OK
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.858180) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.859847) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.859872) EVENT_LOG_v1 {"time_micros": 1764103945859865, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.859893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2321993, prev total WAL file size 2321993, number of live WAL files 2.
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.860954) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2219KB)], [65(4357KB)]
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103945860986, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 6735173, "oldest_snapshot_seqno": -1}
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 4373 keys, 5507617 bytes, temperature: kUnknown
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103945902059, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 5507617, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5478715, "index_size": 16888, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 108264, "raw_average_key_size": 24, "raw_value_size": 5400316, "raw_average_value_size": 1234, "num_data_blocks": 707, "num_entries": 4373, "num_filter_entries": 4373, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764103945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.902293) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 5507617 bytes
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.903780) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.6 rd, 133.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 4.3 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(5.4) write-amplify(2.4) OK, records in: 5398, records dropped: 1025 output_compression: NoCompression
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.903811) EVENT_LOG_v1 {"time_micros": 1764103945903788, "job": 36, "event": "compaction_finished", "compaction_time_micros": 41157, "compaction_time_cpu_micros": 25480, "output_level": 6, "num_output_files": 1, "total_output_size": 5507617, "num_input_records": 5398, "num_output_records": 4373, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103945904485, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764103945905534, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.860909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.905583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.905587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.905589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.905590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:52:25 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:52:25.905592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
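
The "compacted to" summary for [JOB 36] above is fully determined by the surrounding EVENT_LOG_v1 records. A quick check of the arithmetic (plain Python; byte counts copied verbatim from the events, MB taken as 10^6 bytes so bytes/µs reads directly as MB/s):

l0_in     = 2272848           # table #67, the level-0 flush output
total_in  = 6735173           # input_data_size from compaction_started
l6_in     = total_in - l0_in  # table #65, the existing level-6 file (~4357 KB)
out_bytes = 5507617           # table #68, the compaction output
micros    = 41157             # compaction_time_micros

print(f"rd  {total_in / micros:.1f} MB/s")                    # 163.6
print(f"wr  {out_bytes / micros:.1f} MB/s")                   # 133.8
print(f"read-write-amplify {(total_in + out_bytes) / l0_in:.1f}")  # 5.4
print(f"write-amplify      {out_bytes / l0_in:.1f}")          # 2.4
print(f"records dropped    {5398 - 4373}")                    # 1025
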
Nov 25 20:52:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1424: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:52:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:52:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:52:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:52:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:52:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:52:27 compute-0 ceph-mon[75144]: pgmap v1424: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1425: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:29 compute-0 ceph-mon[75144]: pgmap v1425: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1426: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:31 compute-0 ceph-mon[75144]: pgmap v1426: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1427: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:33 compute-0 sudo[277595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:52:33 compute-0 sudo[277595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:33 compute-0 sudo[277595]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:33 compute-0 ceph-mon[75144]: pgmap v1427: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:33 compute-0 sudo[277620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:52:33 compute-0 sudo[277620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:33 compute-0 sudo[277620]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:34 compute-0 sudo[277645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:52:34 compute-0 sudo[277645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:34 compute-0 sudo[277645]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1428: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:34 compute-0 sudo[277670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:52:34 compute-0 sudo[277670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:34 compute-0 sudo[277670]: pam_unix(sudo:session): session closed for user root
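
The sudo records above show cephadm's usual host probe before any real work: verify passwordless sudo with /bin/true, locate python3, then invoke the staged cephadm binary. A rough illustration of the same sequence (a hypothetical wrapper, not cephadm's actual code; the binary path and --timeout value are copied from the log):

import subprocess

def run(cmd):
    # Run a command, fail loudly, return its stdout as text.
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

run(["sudo", "true"])                            # passwordless sudo works?
py = run(["sudo", "which", "python3"]).strip()   # interpreter to use
facts = run(["sudo", py,
             "/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/"
             "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
             "--timeout", "895", "gather-facts"])
print(facts[:300])
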
Nov 25 20:52:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:52:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:52:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:52:34 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:52:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:52:34 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:52:34 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev b113a10c-3be1-4aa7-99ec-6af48dfa6146 does not exist
Nov 25 20:52:34 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 21350def-78ca-4e42-943a-70c38a80dcbc does not exist
Nov 25 20:52:34 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev d81d4d0c-b7a3-4b47-bf21-9d023901f1b9 does not exist
Nov 25 20:52:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:52:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:52:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:52:34 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:52:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:52:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:52:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:52:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:52:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:52:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:52:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:52:34 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
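
The mon_command dispatches audited above come from the mgr's cephadm module, but the same commands can be issued directly with the python-rados binding. A sketch, assuming a reachable cluster, /etc/ceph/ceph.conf, and an admin keyring on the host:

import json
import rados

with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
    for cmd in (
        {"prefix": "config generate-minimal-conf"},
        {"prefix": "auth get", "entity": "client.bootstrap-osd"},
        {"prefix": "osd tree", "states": ["destroyed"], "format": "json"},
    ):
        # mon_command takes the command as a JSON string plus an input buffer
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, outbuf[:80])
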
Nov 25 20:52:34 compute-0 sudo[277728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:52:34 compute-0 sudo[277728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:34 compute-0 sudo[277728]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:35 compute-0 sudo[277753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:52:35 compute-0 sudo[277753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:35 compute-0 sudo[277753]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:35 compute-0 sudo[277778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:52:35 compute-0 sudo[277778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:35 compute-0 sudo[277778]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:35 compute-0 sudo[277803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:52:35 compute-0 sudo[277803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:35 compute-0 podman[277868]: 2025-11-25 20:52:35.690288944 +0000 UTC m=+0.065347230 container create ff46f97b299c14ff67f12b532b33cb9e346ff1b153ef388a90bffd2cf8eadce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:52:35 compute-0 systemd[1]: Started libpod-conmon-ff46f97b299c14ff67f12b532b33cb9e346ff1b153ef388a90bffd2cf8eadce7.scope.
Nov 25 20:52:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:35 compute-0 podman[277868]: 2025-11-25 20:52:35.66296019 +0000 UTC m=+0.038018536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:52:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:52:35 compute-0 podman[277868]: 2025-11-25 20:52:35.792431214 +0000 UTC m=+0.167489500 container init ff46f97b299c14ff67f12b532b33cb9e346ff1b153ef388a90bffd2cf8eadce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swanson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:52:35 compute-0 podman[277868]: 2025-11-25 20:52:35.804835911 +0000 UTC m=+0.179894207 container start ff46f97b299c14ff67f12b532b33cb9e346ff1b153ef388a90bffd2cf8eadce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:52:35 compute-0 podman[277868]: 2025-11-25 20:52:35.808981834 +0000 UTC m=+0.184040100 container attach ff46f97b299c14ff67f12b532b33cb9e346ff1b153ef388a90bffd2cf8eadce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 20:52:35 compute-0 reverent_swanson[277884]: 167 167
Nov 25 20:52:35 compute-0 systemd[1]: libpod-ff46f97b299c14ff67f12b532b33cb9e346ff1b153ef388a90bffd2cf8eadce7.scope: Deactivated successfully.
Nov 25 20:52:35 compute-0 podman[277868]: 2025-11-25 20:52:35.813695092 +0000 UTC m=+0.188753388 container died ff46f97b299c14ff67f12b532b33cb9e346ff1b153ef388a90bffd2cf8eadce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swanson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 20:52:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-58ea2625e8cb8619b887d7702619a2f8ed771068cbc8b8d8589e472600c8196a-merged.mount: Deactivated successfully.
Nov 25 20:52:35 compute-0 podman[277868]: 2025-11-25 20:52:35.866345546 +0000 UTC m=+0.241403812 container remove ff46f97b299c14ff67f12b532b33cb9e346ff1b153ef388a90bffd2cf8eadce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:52:35 compute-0 systemd[1]: libpod-conmon-ff46f97b299c14ff67f12b532b33cb9e346ff1b153ef388a90bffd2cf8eadce7.scope: Deactivated successfully.
Nov 25 20:52:35 compute-0 ceph-mon[75144]: pgmap v1428: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1429: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:36 compute-0 podman[277907]: 2025-11-25 20:52:36.120884994 +0000 UTC m=+0.068122966 container create b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_almeida, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:52:36 compute-0 podman[277907]: 2025-11-25 20:52:36.093048316 +0000 UTC m=+0.040286338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:52:36 compute-0 systemd[1]: Started libpod-conmon-b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39.scope.
Nov 25 20:52:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db36b472c9a81cac55c4fb13918dd49cbc38f8882c9502553fe8afd168bf73fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db36b472c9a81cac55c4fb13918dd49cbc38f8882c9502553fe8afd168bf73fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db36b472c9a81cac55c4fb13918dd49cbc38f8882c9502553fe8afd168bf73fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db36b472c9a81cac55c4fb13918dd49cbc38f8882c9502553fe8afd168bf73fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db36b472c9a81cac55c4fb13918dd49cbc38f8882c9502553fe8afd168bf73fe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:36 compute-0 podman[277907]: 2025-11-25 20:52:36.25965067 +0000 UTC m=+0.206888632 container init b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:52:36 compute-0 podman[277907]: 2025-11-25 20:52:36.27172129 +0000 UTC m=+0.218959262 container start b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_almeida, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:52:36 compute-0 podman[277907]: 2025-11-25 20:52:36.276444787 +0000 UTC m=+0.223682809 container attach b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:52:36 compute-0 ceph-mon[75144]: pgmap v1429: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:37 compute-0 epic_almeida[277924]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:52:37 compute-0 epic_almeida[277924]: --> relative data size: 1.0
Nov 25 20:52:37 compute-0 epic_almeida[277924]: --> All data devices are unavailable
Nov 25 20:52:37 compute-0 systemd[1]: libpod-b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39.scope: Deactivated successfully.
Nov 25 20:52:37 compute-0 podman[277907]: 2025-11-25 20:52:37.459241261 +0000 UTC m=+1.406479223 container died b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:52:37 compute-0 systemd[1]: libpod-b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39.scope: Consumed 1.137s CPU time.
Nov 25 20:52:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-db36b472c9a81cac55c4fb13918dd49cbc38f8882c9502553fe8afd168bf73fe-merged.mount: Deactivated successfully.
Nov 25 20:52:37 compute-0 podman[277907]: 2025-11-25 20:52:37.544977544 +0000 UTC m=+1.492215516 container remove b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:52:37 compute-0 systemd[1]: libpod-conmon-b1f0b805ddd7b62bbf22c916ab1f62f0286a754575275d9c6e9ab067785f5c39.scope: Deactivated successfully.
Nov 25 20:52:37 compute-0 sudo[277803]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:37 compute-0 sudo[277969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:52:37 compute-0 sudo[277969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:37 compute-0 sudo[277969]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:37 compute-0 sudo[277994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:52:37 compute-0 sudo[277994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:37 compute-0 sudo[277994]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:37 compute-0 sudo[278019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:52:37 compute-0 sudo[278019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:37 compute-0 sudo[278019]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:37 compute-0 sudo[278044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:52:37 compute-0 sudo[278044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1430: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:38 compute-0 podman[278110]: 2025-11-25 20:52:38.441682311 +0000 UTC m=+0.070928852 container create 56ae6767d4e17f9854d4b6f999f0783d8cc64e4ea7fa4499680898b5627ed7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:52:38 compute-0 systemd[1]: Started libpod-conmon-56ae6767d4e17f9854d4b6f999f0783d8cc64e4ea7fa4499680898b5627ed7a8.scope.
Nov 25 20:52:38 compute-0 podman[278110]: 2025-11-25 20:52:38.413439442 +0000 UTC m=+0.042686053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:52:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:52:38 compute-0 podman[278110]: 2025-11-25 20:52:38.54306893 +0000 UTC m=+0.172315541 container init 56ae6767d4e17f9854d4b6f999f0783d8cc64e4ea7fa4499680898b5627ed7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:52:38 compute-0 podman[278110]: 2025-11-25 20:52:38.556698801 +0000 UTC m=+0.185945342 container start 56ae6767d4e17f9854d4b6f999f0783d8cc64e4ea7fa4499680898b5627ed7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 20:52:38 compute-0 podman[278110]: 2025-11-25 20:52:38.561152233 +0000 UTC m=+0.190398834 container attach 56ae6767d4e17f9854d4b6f999f0783d8cc64e4ea7fa4499680898b5627ed7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:52:38 compute-0 xenodochial_hopper[278126]: 167 167
Nov 25 20:52:38 compute-0 systemd[1]: libpod-56ae6767d4e17f9854d4b6f999f0783d8cc64e4ea7fa4499680898b5627ed7a8.scope: Deactivated successfully.
Nov 25 20:52:38 compute-0 podman[278110]: 2025-11-25 20:52:38.56361326 +0000 UTC m=+0.192859811 container died 56ae6767d4e17f9854d4b6f999f0783d8cc64e4ea7fa4499680898b5627ed7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 25 20:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fae491914b45158cec1b9c64eb3c718a78727d1eaf79a12058a49a27c3d7ebf-merged.mount: Deactivated successfully.
Nov 25 20:52:38 compute-0 podman[278110]: 2025-11-25 20:52:38.613323562 +0000 UTC m=+0.242570113 container remove 56ae6767d4e17f9854d4b6f999f0783d8cc64e4ea7fa4499680898b5627ed7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:52:38 compute-0 systemd[1]: libpod-conmon-56ae6767d4e17f9854d4b6f999f0783d8cc64e4ea7fa4499680898b5627ed7a8.scope: Deactivated successfully.
Nov 25 20:52:38 compute-0 podman[278151]: 2025-11-25 20:52:38.821719485 +0000 UTC m=+0.061514946 container create 2611ee4d970a1da025f3819eeabd31c4376b62c092ab211f845721542e337074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:52:38 compute-0 systemd[1]: Started libpod-conmon-2611ee4d970a1da025f3819eeabd31c4376b62c092ab211f845721542e337074.scope.
Nov 25 20:52:38 compute-0 podman[278151]: 2025-11-25 20:52:38.794269467 +0000 UTC m=+0.034064988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:52:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5ab8449096784661f062108e02a8f03c980185a1c68e30ac0797c3e4a8b0f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5ab8449096784661f062108e02a8f03c980185a1c68e30ac0797c3e4a8b0f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5ab8449096784661f062108e02a8f03c980185a1c68e30ac0797c3e4a8b0f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5ab8449096784661f062108e02a8f03c980185a1c68e30ac0797c3e4a8b0f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:38 compute-0 podman[278151]: 2025-11-25 20:52:38.933890168 +0000 UTC m=+0.173685669 container init 2611ee4d970a1da025f3819eeabd31c4376b62c092ab211f845721542e337074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:52:38 compute-0 podman[278151]: 2025-11-25 20:52:38.93948246 +0000 UTC m=+0.179277931 container start 2611ee4d970a1da025f3819eeabd31c4376b62c092ab211f845721542e337074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_antonelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:52:38 compute-0 podman[278151]: 2025-11-25 20:52:38.943231032 +0000 UTC m=+0.183026533 container attach 2611ee4d970a1da025f3819eeabd31c4376b62c092ab211f845721542e337074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 20:52:39 compute-0 ceph-mon[75144]: pgmap v1430: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]: {
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:     "0": [
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:         {
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "devices": [
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "/dev/loop3"
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             ],
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_name": "ceph_lv0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_size": "21470642176",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "name": "ceph_lv0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "tags": {
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.cluster_name": "ceph",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.crush_device_class": "",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.encrypted": "0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.osd_id": "0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.type": "block",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.vdo": "0"
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             },
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "type": "block",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "vg_name": "ceph_vg0"
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:         }
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:     ],
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:     "1": [
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:         {
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "devices": [
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "/dev/loop4"
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             ],
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_name": "ceph_lv1",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_size": "21470642176",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "name": "ceph_lv1",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "tags": {
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.cluster_name": "ceph",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.crush_device_class": "",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.encrypted": "0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.osd_id": "1",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.type": "block",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.vdo": "0"
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             },
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "type": "block",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "vg_name": "ceph_vg1"
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:         }
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:     ],
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:     "2": [
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:         {
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "devices": [
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "/dev/loop5"
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             ],
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_name": "ceph_lv2",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_size": "21470642176",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "name": "ceph_lv2",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "tags": {
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.cluster_name": "ceph",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.crush_device_class": "",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.encrypted": "0",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.osd_id": "2",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.type": "block",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:                 "ceph.vdo": "0"
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             },
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "type": "block",
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:             "vg_name": "ceph_vg2"
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:         }
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]:     ]
Nov 25 20:52:39 compute-0 unruffled_antonelli[278167]: }
Nov 25 20:52:39 compute-0 systemd[1]: libpod-2611ee4d970a1da025f3819eeabd31c4376b62c092ab211f845721542e337074.scope: Deactivated successfully.
Nov 25 20:52:39 compute-0 podman[278151]: 2025-11-25 20:52:39.738347543 +0000 UTC m=+0.978143014 container died 2611ee4d970a1da025f3819eeabd31c4376b62c092ab211f845721542e337074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_antonelli, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a5ab8449096784661f062108e02a8f03c980185a1c68e30ac0797c3e4a8b0f8-merged.mount: Deactivated successfully.
Nov 25 20:52:39 compute-0 podman[278151]: 2025-11-25 20:52:39.819276766 +0000 UTC m=+1.059072227 container remove 2611ee4d970a1da025f3819eeabd31c4376b62c092ab211f845721542e337074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_antonelli, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 20:52:39 compute-0 systemd[1]: libpod-conmon-2611ee4d970a1da025f3819eeabd31c4376b62c092ab211f845721542e337074.scope: Deactivated successfully.
Nov 25 20:52:39 compute-0 sudo[278044]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:39 compute-0 sudo[278188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:52:39 compute-0 sudo[278188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:39 compute-0 sudo[278188]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:40 compute-0 sudo[278213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:52:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1431: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:40 compute-0 sudo[278213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:40 compute-0 sudo[278213]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:40 compute-0 sudo[278238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:52:40 compute-0 sudo[278238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:40 compute-0 sudo[278238]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:40 compute-0 sudo[278263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:52:40 compute-0 sudo[278263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:40 compute-0 podman[278329]: 2025-11-25 20:52:40.633982851 +0000 UTC m=+0.070460959 container create 5890b96cf7eb77b905705c418932d519918bd5288538786122a05c5fc01c72a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jang, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 20:52:40 compute-0 systemd[1]: Started libpod-conmon-5890b96cf7eb77b905705c418932d519918bd5288538786122a05c5fc01c72a8.scope.
Nov 25 20:52:40 compute-0 podman[278329]: 2025-11-25 20:52:40.607286584 +0000 UTC m=+0.043764742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:52:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:52:40 compute-0 podman[278329]: 2025-11-25 20:52:40.739784261 +0000 UTC m=+0.176262359 container init 5890b96cf7eb77b905705c418932d519918bd5288538786122a05c5fc01c72a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jang, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:52:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:40 compute-0 podman[278329]: 2025-11-25 20:52:40.752038674 +0000 UTC m=+0.188516772 container start 5890b96cf7eb77b905705c418932d519918bd5288538786122a05c5fc01c72a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jang, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:52:40 compute-0 podman[278329]: 2025-11-25 20:52:40.756465915 +0000 UTC m=+0.192944013 container attach 5890b96cf7eb77b905705c418932d519918bd5288538786122a05c5fc01c72a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:52:40 compute-0 hungry_jang[278345]: 167 167
Nov 25 20:52:40 compute-0 systemd[1]: libpod-5890b96cf7eb77b905705c418932d519918bd5288538786122a05c5fc01c72a8.scope: Deactivated successfully.
Nov 25 20:52:40 compute-0 podman[278329]: 2025-11-25 20:52:40.761528732 +0000 UTC m=+0.198006870 container died 5890b96cf7eb77b905705c418932d519918bd5288538786122a05c5fc01c72a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jang, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cde896f5f8c1e7877056a20229f2cd5e4aad3e68ee6401c6c0a9ccdb6275881a-merged.mount: Deactivated successfully.
Nov 25 20:52:40 compute-0 podman[278329]: 2025-11-25 20:52:40.809410685 +0000 UTC m=+0.245888793 container remove 5890b96cf7eb77b905705c418932d519918bd5288538786122a05c5fc01c72a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:52:40 compute-0 systemd[1]: libpod-conmon-5890b96cf7eb77b905705c418932d519918bd5288538786122a05c5fc01c72a8.scope: Deactivated successfully.
Nov 25 20:52:41 compute-0 podman[278371]: 2025-11-25 20:52:41.075840177 +0000 UTC m=+0.068911227 container create 6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:52:41 compute-0 ceph-mon[75144]: pgmap v1431: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:41 compute-0 systemd[1]: Started libpod-conmon-6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c.scope.
Nov 25 20:52:41 compute-0 podman[278371]: 2025-11-25 20:52:41.047393842 +0000 UTC m=+0.040464942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:52:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/644755dd3ab97adf7c1461cf9e2e054eed5aaf2186678f0428fe70226ea21a07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/644755dd3ab97adf7c1461cf9e2e054eed5aaf2186678f0428fe70226ea21a07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/644755dd3ab97adf7c1461cf9e2e054eed5aaf2186678f0428fe70226ea21a07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/644755dd3ab97adf7c1461cf9e2e054eed5aaf2186678f0428fe70226ea21a07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:52:41 compute-0 podman[278371]: 2025-11-25 20:52:41.200381627 +0000 UTC m=+0.193452697 container init 6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_elbakyan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:52:41 compute-0 podman[278371]: 2025-11-25 20:52:41.217942414 +0000 UTC m=+0.211013464 container start 6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_elbakyan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:52:41 compute-0 podman[278371]: 2025-11-25 20:52:41.222578101 +0000 UTC m=+0.215649161 container attach 6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_elbakyan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:52:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1432: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]: {
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "osd_id": 2,
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "type": "bluestore"
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:     },
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "osd_id": 1,
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "type": "bluestore"
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:     },
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "osd_id": 0,
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:         "type": "bluestore"
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]:     }
Nov 25 20:52:42 compute-0 pedantic_elbakyan[278387]: }
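[That object is the complete answer to the ceph-volume raw list --format json call dispatched at 20:52:40: three bluestore OSDs (ids 0, 1, 2) on /dev/mapper/ceph_vg0-ceph_lv0 through ceph_vg2-ceph_lv2, all under ceph_fsid 712dd110-763a-5547-8ef7-acda1414fdce. A hedged sketch that tabulates such a report, keyed exactly as printed above:

    import json

    def osd_rows(raw_report):
        """raw_report: dict keyed by osd_uuid, as printed by pedantic_elbakyan."""
        return sorted((e["osd_id"], e["device"], e["osd_uuid"])
                      for e in raw_report.values())

    # "raw_list.json" is a hypothetical capture of the JSON shown above.
    with open("raw_list.json") as f:
        for osd_id, device, osd_uuid in osd_rows(json.load(f)):
            print(f"osd.{osd_id} -> {device} ({osd_uuid})")
]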
Nov 25 20:52:42 compute-0 systemd[1]: libpod-6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c.scope: Deactivated successfully.
Nov 25 20:52:42 compute-0 podman[278371]: 2025-11-25 20:52:42.473737975 +0000 UTC m=+1.466809005 container died 6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:52:42 compute-0 systemd[1]: libpod-6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c.scope: Consumed 1.260s CPU time.
Nov 25 20:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-644755dd3ab97adf7c1461cf9e2e054eed5aaf2186678f0428fe70226ea21a07-merged.mount: Deactivated successfully.
Nov 25 20:52:42 compute-0 podman[278371]: 2025-11-25 20:52:42.542558208 +0000 UTC m=+1.535629228 container remove 6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:52:42 compute-0 systemd[1]: libpod-conmon-6a3e3f63ae0f4aed8cd4aa9b4c53cf95fe5f5255438bdac66545e86ebfa07e0c.scope: Deactivated successfully.
Nov 25 20:52:42 compute-0 sudo[278263]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:52:42 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:52:42 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:52:42 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:52:42 compute-0 sudo[278432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:52:42 compute-0 sudo[278432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:42 compute-0 sudo[278432]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:42 compute-0 sudo[278457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:52:42 compute-0 sudo[278457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:52:42 compute-0 sudo[278457]: pam_unix(sudo:session): session closed for user root
Nov 25 20:52:43 compute-0 ceph-mon[75144]: pgmap v1432: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:52:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:52:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1433: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:45 compute-0 ceph-mon[75144]: pgmap v1433: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:45 compute-0 podman[278482]: 2025-11-25 20:52:45.986187275 +0000 UTC m=+0.081198961 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 20:52:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1434: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:47 compute-0 ceph-mon[75144]: pgmap v1434: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:48 compute-0 podman[278502]: 2025-11-25 20:52:48.013098054 +0000 UTC m=+0.105827292 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:52:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1435: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:52:48.971 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:52:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:52:48.972 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:52:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:52:48.972 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:52:49 compute-0 ceph-mon[75144]: pgmap v1435: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1436: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:51 compute-0 nova_compute[248866]: 2025-11-25 20:52:51.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:52:51 compute-0 ceph-mon[75144]: pgmap v1436: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1437: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:53 compute-0 nova_compute[248866]: 2025-11-25 20:52:53.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:52:53 compute-0 ceph-mon[75144]: pgmap v1437: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1438: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:54 compute-0 nova_compute[248866]: 2025-11-25 20:52:54.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:52:55 compute-0 podman[278522]: 2025-11-25 20:52:55.054586157 +0000 UTC m=+0.152755198 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:52:55 compute-0 ceph-mon[75144]: pgmap v1438: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:52:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1439: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:56 compute-0 nova_compute[248866]: 2025-11-25 20:52:56.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:52:56 compute-0 nova_compute[248866]: 2025-11-25 20:52:56.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:52:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:52:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:52:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:52:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:52:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:52:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:52:57
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'volumes', 'backups', 'vms']
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:52:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:52:57 compute-0 ceph-mon[75144]: pgmap v1439: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1440: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:52:59 compute-0 ceph-mon[75144]: pgmap v1440: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:00 compute-0 nova_compute[248866]: 2025-11-25 20:53:00.038 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:53:00 compute-0 nova_compute[248866]: 2025-11-25 20:53:00.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:53:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1441: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:00 compute-0 nova_compute[248866]: 2025-11-25 20:53:00.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:53:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.077 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.077 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.077 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.077 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.078 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:53:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:53:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4225919244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.533 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:53:01 compute-0 ceph-mon[75144]: pgmap v1441: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:01 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4225919244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.800 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.802 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5247MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.803 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.803 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.903 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.904 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:53:01 compute-0 nova_compute[248866]: 2025-11-25 20:53:01.942 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1442: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:53:02 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922400203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:53:02 compute-0 nova_compute[248866]: 2025-11-25 20:53:02.392 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:53:02 compute-0 nova_compute[248866]: 2025-11-25 20:53:02.400 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:53:02 compute-0 nova_compute[248866]: 2025-11-25 20:53:02.418 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:53:02 compute-0 nova_compute[248866]: 2025-11-25 20:53:02.420 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:53:02 compute-0 nova_compute[248866]: 2025-11-25 20:53:02.421 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
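[The inventory dict nova.scheduler.client.report logged at 20:53:02 fixes this node's schedulable capacity; placement derives it per resource class as (total - reserved) x allocation_ratio. A worked check of those numbers — a sketch, not nova code:

    # Values copied from the "Inventory has not changed" line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")   # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 53.1
]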
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:53:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
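[The pg_autoscaler pass above sizes each pool from its share of the 64411926528-byte root. Only '.mgr' is non-trivial: its usage ratio 1.4371499967441557e-05 times a PG budget of 300 (presumably the default mon_target_pg_per_osd=100 across the 3 OSDs) is exactly the logged target 0.004311449990232467, which quantizes up to 1 PG. A one-line check:

    # Reproduce the '.mgr' autoscaler line above. The 100-PGs-per-OSD budget is
    # an assumption (the default mon_target_pg_per_osd); 3 is this host's OSD count.
    usage_ratio = 1.4371499967441557e-05
    target = usage_ratio * (100 * 3)
    print(target)   # 0.004311449990232467, matching the log
    # The autoscaler then quantizes to a power of two with a floor of 1,
    # hence "quantized to 1 (current 1)".
]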
Nov 25 20:53:02 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/922400203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:53:03 compute-0 nova_compute[248866]: 2025-11-25 20:53:03.421 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:53:03 compute-0 nova_compute[248866]: 2025-11-25 20:53:03.422 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:53:03 compute-0 nova_compute[248866]: 2025-11-25 20:53:03.423 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:53:03 compute-0 nova_compute[248866]: 2025-11-25 20:53:03.449 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:53:03 compute-0 ceph-mon[75144]: pgmap v1442: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1443: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:05 compute-0 ceph-mon[75144]: pgmap v1443: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1444: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:07 compute-0 ceph-mon[75144]: pgmap v1444: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1445: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:09 compute-0 ceph-mon[75144]: pgmap v1445: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1446: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:11 compute-0 ceph-mon[75144]: pgmap v1446: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1447: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:13 compute-0 ceph-mon[75144]: pgmap v1447: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1448: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:15 compute-0 ceph-mon[75144]: pgmap v1448: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1449: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:16 compute-0 podman[278594]: 2025-11-25 20:53:16.97301475 +0000 UTC m=+0.075628229 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 20:53:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:53:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3535875229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:53:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:53:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3535875229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:53:17 compute-0 ceph-mon[75144]: pgmap v1449: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3535875229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:53:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/3535875229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:53:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1450: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:19 compute-0 podman[278613]: 2025-11-25 20:53:19.034160619 +0000 UTC m=+0.122816143 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:53:19 compute-0 ceph-mon[75144]: pgmap v1450: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1451: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
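The recurring `_set_new_cache_sizes` lines are the mon's cache autotuner re-dividing its memory budget (here roughly 0.95 GiB) between the inc, full and kv caches. The budget itself is an ordinary config knob; a sketch of raising it, assuming the standard mon_memory_target option (value in bytes):

    # Sketch: raise the memory budget the mon cache autotuner divides up.
    # mon_memory_target is assumed to be the relevant knob; value in bytes.
    import subprocess

    subprocess.check_call(
        ["ceph", "config", "set", "mon", "mon_memory_target", "2147483648"])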
Nov 25 20:53:21 compute-0 ceph-mon[75144]: pgmap v1451: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1452: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:23 compute-0 ceph-mon[75144]: pgmap v1452: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1453: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:25 compute-0 ceph-mon[75144]: pgmap v1453: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1454: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:26 compute-0 podman[278633]: 2025-11-25 20:53:26.076569578 +0000 UTC m=+0.159505102 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 20:53:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:53:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:53:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:53:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:53:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:53:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:53:27 compute-0 ceph-mon[75144]: pgmap v1454: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1455: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:29 compute-0 ceph-mon[75144]: pgmap v1455: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1456: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:31 compute-0 ceph-mon[75144]: pgmap v1456: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1457: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:33 compute-0 ceph-mon[75144]: pgmap v1457: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1458: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:35 compute-0 ceph-mon[75144]: pgmap v1458: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1459: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:37 compute-0 ceph-mon[75144]: pgmap v1459: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1460: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:39 compute-0 ceph-mon[75144]: pgmap v1460: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1461: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:41 compute-0 ceph-mon[75144]: pgmap v1461: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1462: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:42 compute-0 sudo[278660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:53:42 compute-0 sudo[278660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:42 compute-0 sudo[278660]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:42 compute-0 sudo[278685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:53:42 compute-0 sudo[278685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:42 compute-0 sudo[278685]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:43 compute-0 sudo[278710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:53:43 compute-0 sudo[278710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:43 compute-0 sudo[278710]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:43 compute-0 sudo[278735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:53:43 compute-0 sudo[278735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:43 compute-0 sudo[278735]: pam_unix(sudo:session): session closed for user root
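The sudo invocation above is cephadm's periodic host inventory: the mgr ships a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/ and runs it with gather-facts, which prints host facts as a single JSON object. A sketch of invoking the same binary directly, path copied verbatim from the log (root required, as in the sudo line):

    # Sketch: re-run the gather-facts call issued above and list the
    # top-level fact keys (hostname, cpu/memory/interface facts, ...).
    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    facts = json.loads(subprocess.check_output(
        ["python3", CEPHADM, "--timeout", "895", "gather-facts"]))
    print(sorted(facts))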
Nov 25 20:53:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:53:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:53:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:53:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:53:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:53:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:53:43 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 675c4002-79a7-4401-b941-fb1ff6dee2bb does not exist
Nov 25 20:53:43 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 5de60861-14f9-4384-bad3-2f06f659b144 does not exist
Nov 25 20:53:43 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev ad0f4bb6-62ce-4f1e-95c2-606a0e34a9ba does not exist
Nov 25 20:53:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:53:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:53:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:53:43 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:53:43 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:53:43 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:53:43 compute-0 sudo[278792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:53:43 compute-0 sudo[278792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:43 compute-0 sudo[278792]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:43 compute-0 ceph-mon[75144]: pgmap v1462: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:53:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:53:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:53:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:53:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:53:43 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
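The audit burst above is the mgr staging an OSD deployment: it renders a minimal ceph.conf with `config generate-minimal-conf` and fetches the client.admin and client.bootstrap-osd keyrings with `auth get`, the files it will ship to the target host. Both are plain mon commands and can be reproduced directly; a sketch:

    # Sketch: the two mon commands the mgr dispatched above, run by hand.
    import subprocess

    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"])
    bootstrap_key = subprocess.check_output(
        ["ceph", "auth", "get", "client.bootstrap-osd"])
    print(minimal_conf.decode())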
Nov 25 20:53:43 compute-0 sudo[278817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:53:43 compute-0 sudo[278817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:43 compute-0 sudo[278817]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:43 compute-0 sudo[278842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:53:43 compute-0 sudo[278842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:43 compute-0 sudo[278842]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1463: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:44 compute-0 sudo[278867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:53:44 compute-0 sudo[278867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:44 compute-0 podman[278933]: 2025-11-25 20:53:44.507995174 +0000 UTC m=+0.062389340 container create f54df7eb8e3f9123878237dbeff3f49b327688d755c81bb791b20dfa0374c3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 20:53:44 compute-0 systemd[1]: Started libpod-conmon-f54df7eb8e3f9123878237dbeff3f49b327688d755c81bb791b20dfa0374c3de.scope.
Nov 25 20:53:44 compute-0 podman[278933]: 2025-11-25 20:53:44.480016432 +0000 UTC m=+0.034410658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:53:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:53:44 compute-0 podman[278933]: 2025-11-25 20:53:44.617155915 +0000 UTC m=+0.171550131 container init f54df7eb8e3f9123878237dbeff3f49b327688d755c81bb791b20dfa0374c3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:53:44 compute-0 podman[278933]: 2025-11-25 20:53:44.629967383 +0000 UTC m=+0.184361549 container start f54df7eb8e3f9123878237dbeff3f49b327688d755c81bb791b20dfa0374c3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 20:53:44 compute-0 podman[278933]: 2025-11-25 20:53:44.634198828 +0000 UTC m=+0.188593044 container attach f54df7eb8e3f9123878237dbeff3f49b327688d755c81bb791b20dfa0374c3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 20:53:44 compute-0 zen_feynman[278949]: 167 167
Nov 25 20:53:44 compute-0 systemd[1]: libpod-f54df7eb8e3f9123878237dbeff3f49b327688d755c81bb791b20dfa0374c3de.scope: Deactivated successfully.
Nov 25 20:53:44 compute-0 podman[278933]: 2025-11-25 20:53:44.63976582 +0000 UTC m=+0.194160006 container died f54df7eb8e3f9123878237dbeff3f49b327688d755c81bb791b20dfa0374c3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:53:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-676fa8522df73cf7618df7e32e7046897fbabb804fe27daa5ba2cceeaf3d20fe-merged.mount: Deactivated successfully.
Nov 25 20:53:44 compute-0 podman[278933]: 2025-11-25 20:53:44.698328644 +0000 UTC m=+0.252722800 container remove f54df7eb8e3f9123878237dbeff3f49b327688d755c81bb791b20dfa0374c3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:53:44 compute-0 systemd[1]: libpod-conmon-f54df7eb8e3f9123878237dbeff3f49b327688d755c81bb791b20dfa0374c3de.scope: Deactivated successfully.
Nov 25 20:53:44 compute-0 podman[278973]: 2025-11-25 20:53:44.952350487 +0000 UTC m=+0.076150303 container create 9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 20:53:45 compute-0 systemd[1]: Started libpod-conmon-9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f.scope.
Nov 25 20:53:45 compute-0 podman[278973]: 2025-11-25 20:53:44.921359634 +0000 UTC m=+0.045159510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:53:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:53:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72036eb6256168d95ab1bdb03a257408d8cca0142a01c35e0aaec13f9bb08289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72036eb6256168d95ab1bdb03a257408d8cca0142a01c35e0aaec13f9bb08289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72036eb6256168d95ab1bdb03a257408d8cca0142a01c35e0aaec13f9bb08289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72036eb6256168d95ab1bdb03a257408d8cca0142a01c35e0aaec13f9bb08289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72036eb6256168d95ab1bdb03a257408d8cca0142a01c35e0aaec13f9bb08289/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:45 compute-0 podman[278973]: 2025-11-25 20:53:45.057287104 +0000 UTC m=+0.181086910 container init 9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:53:45 compute-0 podman[278973]: 2025-11-25 20:53:45.072249541 +0000 UTC m=+0.196049327 container start 9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:53:45 compute-0 podman[278973]: 2025-11-25 20:53:45.075482189 +0000 UTC m=+0.199281965 container attach 9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 20:53:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:45 compute-0 ceph-mon[75144]: pgmap v1463: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1464: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:46 compute-0 great_banzai[278989]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:53:46 compute-0 great_banzai[278989]: --> relative data size: 1.0
Nov 25 20:53:46 compute-0 great_banzai[278989]: --> All data devices are unavailable
Nov 25 20:53:46 compute-0 systemd[1]: libpod-9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f.scope: Deactivated successfully.
Nov 25 20:53:46 compute-0 podman[278973]: 2025-11-25 20:53:46.224628906 +0000 UTC m=+1.348428722 container died 9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 20:53:46 compute-0 systemd[1]: libpod-9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f.scope: Consumed 1.112s CPU time.
Nov 25 20:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-72036eb6256168d95ab1bdb03a257408d8cca0142a01c35e0aaec13f9bb08289-merged.mount: Deactivated successfully.
Nov 25 20:53:46 compute-0 podman[278973]: 2025-11-25 20:53:46.293052819 +0000 UTC m=+1.416852615 container remove 9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:53:46 compute-0 systemd[1]: libpod-conmon-9143a9fc41a67a65d3964452095cec45e821ee6e1e6db43347abdb31a937938f.scope: Deactivated successfully.
Nov 25 20:53:46 compute-0 sudo[278867]: pam_unix(sudo:session): session closed for user root
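The ceph-volume run above (`lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 ... --yes --no-systemd`) ended with "All data devices are unavailable": all three logical volumes already carry OSDs 0 through 2, as the lvm listing printed just below confirms, so batch filters them out instead of re-preparing them. A dry-run sketch to surface the same filtering decision, assuming the batch subcommand's --report flag:

    # Sketch: report-only re-run of the same batch to see why the three
    # LVs were rejected; --report/--format json are assumed batch flags.
    import subprocess

    subprocess.run([
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--report", "--format", "json",
    ], check=False)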
Nov 25 20:53:46 compute-0 sudo[279030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:53:46 compute-0 sudo[279030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:46 compute-0 sudo[279030]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:46 compute-0 sudo[279055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:53:46 compute-0 sudo[279055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:46 compute-0 sudo[279055]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:46 compute-0 sudo[279080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:53:46 compute-0 sudo[279080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:46 compute-0 sudo[279080]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:46 compute-0 sudo[279105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:53:46 compute-0 sudo[279105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:47 compute-0 podman[279171]: 2025-11-25 20:53:47.209073301 +0000 UTC m=+0.056160280 container create be8ad97114a1aaca6bef3a29f172d1295f2cf3a732205bf164ffdbd26c3569fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 20:53:47 compute-0 systemd[1]: Started libpod-conmon-be8ad97114a1aaca6bef3a29f172d1295f2cf3a732205bf164ffdbd26c3569fb.scope.
Nov 25 20:53:47 compute-0 podman[279171]: 2025-11-25 20:53:47.186202888 +0000 UTC m=+0.033289877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:53:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:53:47 compute-0 podman[279171]: 2025-11-25 20:53:47.313632326 +0000 UTC m=+0.160719345 container init be8ad97114a1aaca6bef3a29f172d1295f2cf3a732205bf164ffdbd26c3569fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 25 20:53:47 compute-0 podman[279171]: 2025-11-25 20:53:47.324333758 +0000 UTC m=+0.171420767 container start be8ad97114a1aaca6bef3a29f172d1295f2cf3a732205bf164ffdbd26c3569fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 20:53:47 compute-0 podman[279171]: 2025-11-25 20:53:47.328264704 +0000 UTC m=+0.175351713 container attach be8ad97114a1aaca6bef3a29f172d1295f2cf3a732205bf164ffdbd26c3569fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:53:47 compute-0 elastic_newton[279188]: 167 167
Nov 25 20:53:47 compute-0 systemd[1]: libpod-be8ad97114a1aaca6bef3a29f172d1295f2cf3a732205bf164ffdbd26c3569fb.scope: Deactivated successfully.
Nov 25 20:53:47 compute-0 podman[279171]: 2025-11-25 20:53:47.33176793 +0000 UTC m=+0.178854899 container died be8ad97114a1aaca6bef3a29f172d1295f2cf3a732205bf164ffdbd26c3569fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 25 20:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8145b1489d7fc07dfd4ecfbc4358062829275125f63ad5c2b44270bb73872e8-merged.mount: Deactivated successfully.
Nov 25 20:53:47 compute-0 podman[279185]: 2025-11-25 20:53:47.381606306 +0000 UTC m=+0.114382793 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 20:53:47 compute-0 podman[279171]: 2025-11-25 20:53:47.388262938 +0000 UTC m=+0.235349937 container remove be8ad97114a1aaca6bef3a29f172d1295f2cf3a732205bf164ffdbd26c3569fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:53:47 compute-0 systemd[1]: libpod-conmon-be8ad97114a1aaca6bef3a29f172d1295f2cf3a732205bf164ffdbd26c3569fb.scope: Deactivated successfully.
Nov 25 20:53:47 compute-0 podman[279230]: 2025-11-25 20:53:47.647201365 +0000 UTC m=+0.068944257 container create b903d818bd67a300146dc0d2f27df96d15f1594cc893dbeff12de7d13100607e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:53:47 compute-0 systemd[1]: Started libpod-conmon-b903d818bd67a300146dc0d2f27df96d15f1594cc893dbeff12de7d13100607e.scope.
Nov 25 20:53:47 compute-0 podman[279230]: 2025-11-25 20:53:47.623280175 +0000 UTC m=+0.045023137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:53:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd4d9bc1dd62ab02863403479e43c302b2effed128457da17d13e38474ece30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd4d9bc1dd62ab02863403479e43c302b2effed128457da17d13e38474ece30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd4d9bc1dd62ab02863403479e43c302b2effed128457da17d13e38474ece30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd4d9bc1dd62ab02863403479e43c302b2effed128457da17d13e38474ece30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:47 compute-0 podman[279230]: 2025-11-25 20:53:47.767494519 +0000 UTC m=+0.189237471 container init b903d818bd67a300146dc0d2f27df96d15f1594cc893dbeff12de7d13100607e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:53:47 compute-0 podman[279230]: 2025-11-25 20:53:47.781334316 +0000 UTC m=+0.203077238 container start b903d818bd67a300146dc0d2f27df96d15f1594cc893dbeff12de7d13100607e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bohr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 25 20:53:47 compute-0 podman[279230]: 2025-11-25 20:53:47.785222523 +0000 UTC m=+0.206965485 container attach b903d818bd67a300146dc0d2f27df96d15f1594cc893dbeff12de7d13100607e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bohr, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 20:53:47 compute-0 ceph-mon[75144]: pgmap v1464: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1465: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:48 compute-0 elastic_bohr[279247]: {
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:     "0": [
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:         {
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "devices": [
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "/dev/loop3"
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             ],
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_name": "ceph_lv0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_size": "21470642176",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "name": "ceph_lv0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "tags": {
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.cluster_name": "ceph",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.crush_device_class": "",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.encrypted": "0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.osd_id": "0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.type": "block",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.vdo": "0"
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             },
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "type": "block",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "vg_name": "ceph_vg0"
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:         }
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:     ],
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:     "1": [
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:         {
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "devices": [
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "/dev/loop4"
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             ],
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_name": "ceph_lv1",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_size": "21470642176",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "name": "ceph_lv1",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "tags": {
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.cluster_name": "ceph",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.crush_device_class": "",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.encrypted": "0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.osd_id": "1",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.type": "block",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.vdo": "0"
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             },
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "type": "block",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "vg_name": "ceph_vg1"
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:         }
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:     ],
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:     "2": [
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:         {
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "devices": [
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "/dev/loop5"
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             ],
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_name": "ceph_lv2",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_size": "21470642176",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "name": "ceph_lv2",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "tags": {
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.cluster_name": "ceph",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.crush_device_class": "",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.encrypted": "0",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.osd_id": "2",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.type": "block",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:                 "ceph.vdo": "0"
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             },
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "type": "block",
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:             "vg_name": "ceph_vg2"
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:         }
Nov 25 20:53:48 compute-0 elastic_bohr[279247]:     ]
Nov 25 20:53:48 compute-0 elastic_bohr[279247]: }
Nov 25 20:53:48 compute-0 systemd[1]: libpod-b903d818bd67a300146dc0d2f27df96d15f1594cc893dbeff12de7d13100607e.scope: Deactivated successfully.
Nov 25 20:53:48 compute-0 podman[279230]: 2025-11-25 20:53:48.56697132 +0000 UTC m=+0.988714242 container died b903d818bd67a300146dc0d2f27df96d15f1594cc893dbeff12de7d13100607e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bohr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fd4d9bc1dd62ab02863403479e43c302b2effed128457da17d13e38474ece30-merged.mount: Deactivated successfully.
Nov 25 20:53:48 compute-0 podman[279230]: 2025-11-25 20:53:48.64120706 +0000 UTC m=+1.062949982 container remove b903d818bd67a300146dc0d2f27df96d15f1594cc893dbeff12de7d13100607e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:53:48 compute-0 systemd[1]: libpod-conmon-b903d818bd67a300146dc0d2f27df96d15f1594cc893dbeff12de7d13100607e.scope: Deactivated successfully.
Nov 25 20:53:48 compute-0 sudo[279105]: pam_unix(sudo:session): session closed for user root
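The JSON block printed by elastic_bohr above is the output of `ceph-volume lvm list --format json`: a map from OSD id to the logical volume(s) backing it, with the ceph.* LVM tags expanded under "tags". A sketch that condenses that structure into one line per OSD, using only fields visible in the log:

    # Sketch: summarize `ceph-volume lvm list --format json` (root required).
    import json
    import subprocess

    listing = json.loads(subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"]))

    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")
    # e.g. osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=f0a2211a-...)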
Nov 25 20:53:48 compute-0 sudo[279268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:53:48 compute-0 sudo[279268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:48 compute-0 sudo[279268]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:48 compute-0 sudo[279293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:53:48 compute-0 sudo[279293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:48 compute-0 sudo[279293]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:53:48.973 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:53:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:53:48.974 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:53:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:53:48.974 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:53:49 compute-0 sudo[279318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:53:49 compute-0 sudo[279318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:49 compute-0 sudo[279318]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:49 compute-0 sudo[279343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:53:49 compute-0 sudo[279343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:49 compute-0 podman[279367]: 2025-11-25 20:53:49.252326715 +0000 UTC m=+0.105733230 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 20:53:49 compute-0 podman[279430]: 2025-11-25 20:53:49.574289378 +0000 UTC m=+0.075109526 container create 2122c76026cb6963d1f71c26242cae8a09e12b3402f94011f35a15671ecd294f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 20:53:49 compute-0 systemd[1]: Started libpod-conmon-2122c76026cb6963d1f71c26242cae8a09e12b3402f94011f35a15671ecd294f.scope.
Nov 25 20:53:49 compute-0 podman[279430]: 2025-11-25 20:53:49.543227002 +0000 UTC m=+0.044047200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:53:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:53:49 compute-0 podman[279430]: 2025-11-25 20:53:49.682989377 +0000 UTC m=+0.183809575 container init 2122c76026cb6963d1f71c26242cae8a09e12b3402f94011f35a15671ecd294f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:53:49 compute-0 podman[279430]: 2025-11-25 20:53:49.696358731 +0000 UTC m=+0.197178879 container start 2122c76026cb6963d1f71c26242cae8a09e12b3402f94011f35a15671ecd294f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:53:49 compute-0 podman[279430]: 2025-11-25 20:53:49.703356781 +0000 UTC m=+0.204176899 container attach 2122c76026cb6963d1f71c26242cae8a09e12b3402f94011f35a15671ecd294f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:53:49 compute-0 ecstatic_albattani[279446]: 167 167
Nov 25 20:53:49 compute-0 systemd[1]: libpod-2122c76026cb6963d1f71c26242cae8a09e12b3402f94011f35a15671ecd294f.scope: Deactivated successfully.
Nov 25 20:53:49 compute-0 podman[279430]: 2025-11-25 20:53:49.706590039 +0000 UTC m=+0.207410227 container died 2122c76026cb6963d1f71c26242cae8a09e12b3402f94011f35a15671ecd294f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_albattani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 20:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-25df873e330e08204b370b5750c475638934320dc1a8898bed42d3757d44ad1d-merged.mount: Deactivated successfully.
Nov 25 20:53:49 compute-0 podman[279430]: 2025-11-25 20:53:49.770779056 +0000 UTC m=+0.271599174 container remove 2122c76026cb6963d1f71c26242cae8a09e12b3402f94011f35a15671ecd294f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:53:49 compute-0 systemd[1]: libpod-conmon-2122c76026cb6963d1f71c26242cae8a09e12b3402f94011f35a15671ecd294f.scope: Deactivated successfully.
Nov 25 20:53:49 compute-0 ceph-mon[75144]: pgmap v1465: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:50 compute-0 podman[279472]: 2025-11-25 20:53:50.013714428 +0000 UTC m=+0.062173033 container create c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 20:53:50 compute-0 systemd[1]: Started libpod-conmon-c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41.scope.
Nov 25 20:53:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1466: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:50 compute-0 podman[279472]: 2025-11-25 20:53:49.985315086 +0000 UTC m=+0.033773771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:53:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915478be0b5f3fe136d9f07015e97450a5918e35162a7f8cf956ac16fa41b569/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915478be0b5f3fe136d9f07015e97450a5918e35162a7f8cf956ac16fa41b569/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915478be0b5f3fe136d9f07015e97450a5918e35162a7f8cf956ac16fa41b569/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915478be0b5f3fe136d9f07015e97450a5918e35162a7f8cf956ac16fa41b569/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:53:50 compute-0 podman[279472]: 2025-11-25 20:53:50.116416824 +0000 UTC m=+0.164875439 container init c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:53:50 compute-0 podman[279472]: 2025-11-25 20:53:50.131409542 +0000 UTC m=+0.179868187 container start c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:53:50 compute-0 podman[279472]: 2025-11-25 20:53:50.135993377 +0000 UTC m=+0.184452002 container attach c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 20:53:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:51 compute-0 nova_compute[248866]: 2025-11-25 20:53:51.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:53:51 compute-0 jolly_hertz[279489]: {
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "osd_id": 2,
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "type": "bluestore"
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:     },
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "osd_id": 1,
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "type": "bluestore"
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:     },
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "osd_id": 0,
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:         "type": "bluestore"
Nov 25 20:53:51 compute-0 jolly_hertz[279489]:     }
Nov 25 20:53:51 compute-0 jolly_hertz[279489]: }
Nov 25 20:53:51 compute-0 systemd[1]: libpod-c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41.scope: Deactivated successfully.
Nov 25 20:53:51 compute-0 podman[279472]: 2025-11-25 20:53:51.242976346 +0000 UTC m=+1.291434991 container died c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:53:51 compute-0 systemd[1]: libpod-c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41.scope: Consumed 1.121s CPU time.
Nov 25 20:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-915478be0b5f3fe136d9f07015e97450a5918e35162a7f8cf956ac16fa41b569-merged.mount: Deactivated successfully.
Nov 25 20:53:51 compute-0 podman[279472]: 2025-11-25 20:53:51.321257427 +0000 UTC m=+1.369716072 container remove c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hertz, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:53:51 compute-0 systemd[1]: libpod-conmon-c5c4ba27800392c0a65677c1ca8a864696ffe00061da1fe716ebb46ce5892e41.scope: Deactivated successfully.
Nov 25 20:53:51 compute-0 sudo[279343]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:53:51 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:53:51 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:53:51 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:53:51 compute-0 sudo[279533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:53:51 compute-0 sudo[279533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:51 compute-0 sudo[279533]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:51 compute-0 sudo[279558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:53:51 compute-0 sudo[279558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:53:51 compute-0 sudo[279558]: pam_unix(sudo:session): session closed for user root
Nov 25 20:53:51 compute-0 ceph-mon[75144]: pgmap v1466: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:51 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:53:51 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:53:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1467: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:53 compute-0 ceph-mon[75144]: pgmap v1467: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:54 compute-0 nova_compute[248866]: 2025-11-25 20:53:54.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:53:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1468: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:55 compute-0 nova_compute[248866]: 2025-11-25 20:53:55.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:53:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:53:55 compute-0 ceph-mon[75144]: pgmap v1468: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1469: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:53:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:53:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:53:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:53:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:53:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:53:57 compute-0 podman[279583]: 2025-11-25 20:53:57.077369464 +0000 UTC m=+0.158034782 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:53:57
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'images', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:53:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:53:57 compute-0 ceph-mon[75144]: pgmap v1469: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:58 compute-0 nova_compute[248866]: 2025-11-25 20:53:58.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:53:58 compute-0 nova_compute[248866]: 2025-11-25 20:53:58.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:53:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1470: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:53:59 compute-0 ceph-mon[75144]: pgmap v1470: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:00 compute-0 nova_compute[248866]: 2025-11-25 20:54:00.037 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:00 compute-0 nova_compute[248866]: 2025-11-25 20:54:00.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1471: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:00 compute-0 ceph-mon[75144]: pgmap v1471: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1472: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.071 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.072 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.072 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.072 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.073 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:54:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:54:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:54:02 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3722839070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.597 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.865 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.868 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5251MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.868 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.869 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.956 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:54:02 compute-0 nova_compute[248866]: 2025-11-25 20:54:02.957 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:54:03 compute-0 nova_compute[248866]: 2025-11-25 20:54:03.026 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:54:03 compute-0 ceph-mon[75144]: pgmap v1472: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:03 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3722839070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:54:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:54:03 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/119178262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:54:03 compute-0 nova_compute[248866]: 2025-11-25 20:54:03.532 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:54:03 compute-0 nova_compute[248866]: 2025-11-25 20:54:03.541 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:54:03 compute-0 nova_compute[248866]: 2025-11-25 20:54:03.567 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:54:03 compute-0 nova_compute[248866]: 2025-11-25 20:54:03.569 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:54:03 compute-0 nova_compute[248866]: 2025-11-25 20:54:03.570 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:54:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1473: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:04 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/119178262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:54:05 compute-0 ceph-mon[75144]: pgmap v1473: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:05 compute-0 nova_compute[248866]: 2025-11-25 20:54:05.571 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:05 compute-0 nova_compute[248866]: 2025-11-25 20:54:05.572 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:54:05 compute-0 nova_compute[248866]: 2025-11-25 20:54:05.572 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:54:05 compute-0 nova_compute[248866]: 2025-11-25 20:54:05.590 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:54:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1474: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:07 compute-0 ceph-mon[75144]: pgmap v1474: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1475: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:09 compute-0 ceph-mon[75144]: pgmap v1475: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1476: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:11 compute-0 ceph-mon[75144]: pgmap v1476: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1477: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:13 compute-0 nova_compute[248866]: 2025-11-25 20:54:13.057 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:13 compute-0 ceph-mon[75144]: pgmap v1477: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1478: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:15 compute-0 ceph-mon[75144]: pgmap v1478: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1479: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:54:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2623078469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:54:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:54:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2623078469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:54:17 compute-0 ceph-mon[75144]: pgmap v1479: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2623078469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:54:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2623078469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:54:18 compute-0 podman[279654]: 2025-11-25 20:54:18.014548609 +0000 UTC m=+0.103524899 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:54:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1480: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:19 compute-0 ceph-mon[75144]: pgmap v1480: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:19 compute-0 podman[279673]: 2025-11-25 20:54:19.978654657 +0000 UTC m=+0.079145895 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 20:54:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1481: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:21 compute-0 ceph-mon[75144]: pgmap v1481: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1482: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:23 compute-0 ceph-mon[75144]: pgmap v1482: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1483: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:25 compute-0 ceph-mon[75144]: pgmap v1483: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1484: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:54:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:54:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:54:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:54:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:54:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:54:27 compute-0 ceph-mon[75144]: pgmap v1484: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1485: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:28 compute-0 podman[279693]: 2025-11-25 20:54:28.09210515 +0000 UTC m=+0.185445379 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 20:54:29 compute-0 ceph-mon[75144]: pgmap v1485: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1486: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:31 compute-0 ceph-mon[75144]: pgmap v1486: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1487: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:33 compute-0 ceph-mon[75144]: pgmap v1487: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1488: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:35 compute-0 ceph-mon[75144]: pgmap v1488: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1489: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:37 compute-0 ceph-mon[75144]: pgmap v1489: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1490: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:39 compute-0 ceph-mon[75144]: pgmap v1490: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1491: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:41 compute-0 ceph-mon[75144]: pgmap v1491: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1492: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:43 compute-0 ceph-mon[75144]: pgmap v1492: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1493: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:45 compute-0 ceph-mon[75144]: pgmap v1493: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1494: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:47 compute-0 ceph-mon[75144]: pgmap v1494: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1495: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:54:48.974 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:54:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:54:48.974 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:54:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:54:48.975 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:54:48 compute-0 podman[279720]: 2025-11-25 20:54:48.988250908 +0000 UTC m=+0.082122136 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 20:54:49 compute-0 ceph-mon[75144]: pgmap v1495: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1496: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:50 compute-0 podman[279740]: 2025-11-25 20:54:50.998891614 +0000 UTC m=+0.089266691 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:54:51 compute-0 ceph-mon[75144]: pgmap v1496: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:51 compute-0 sudo[279761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:54:51 compute-0 sudo[279761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:51 compute-0 sudo[279761]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:51 compute-0 sudo[279786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:54:51 compute-0 sudo[279786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:51 compute-0 sudo[279786]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:51 compute-0 sudo[279811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:54:51 compute-0 sudo[279811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:51 compute-0 sudo[279811]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:51 compute-0 sudo[279836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:54:51 compute-0 sudo[279836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:52 compute-0 nova_compute[248866]: 2025-11-25 20:54:52.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1497: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:52 compute-0 sudo[279836]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:54:52 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:54:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:54:52 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:54:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:54:52 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:54:52 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 173d420f-375b-496d-b3bf-caa9e656b9ba does not exist
Nov 25 20:54:52 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev bb495964-4679-4f8e-99cb-fdef25242f8c does not exist
Nov 25 20:54:52 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 5e38ef06-058b-47f5-9553-54bd265756db does not exist
Nov 25 20:54:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:54:52 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:54:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:54:52 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:54:52 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:54:52 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:54:52 compute-0 sudo[279892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:54:52 compute-0 sudo[279892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:52 compute-0 sudo[279892]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:52 compute-0 sudo[279917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:54:52 compute-0 sudo[279917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:52 compute-0 sudo[279917]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:52 compute-0 sudo[279942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:54:52 compute-0 sudo[279942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:52 compute-0 sudo[279942]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:53 compute-0 sudo[279967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:54:53 compute-0 sudo[279967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:53 compute-0 ceph-mon[75144]: pgmap v1497: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:53 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:54:53 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:54:53 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:54:53 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:54:53 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:54:53 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:54:53 compute-0 podman[280032]: 2025-11-25 20:54:53.489987536 +0000 UTC m=+0.065977607 container create 278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:54:53 compute-0 systemd[1]: Started libpod-conmon-278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa.scope.
Nov 25 20:54:53 compute-0 podman[280032]: 2025-11-25 20:54:53.459015783 +0000 UTC m=+0.035005914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:54:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:54:53 compute-0 podman[280032]: 2025-11-25 20:54:53.595339484 +0000 UTC m=+0.171329555 container init 278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:54:53 compute-0 podman[280032]: 2025-11-25 20:54:53.6077225 +0000 UTC m=+0.183712571 container start 278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 20:54:53 compute-0 podman[280032]: 2025-11-25 20:54:53.61285373 +0000 UTC m=+0.188843801 container attach 278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_turing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 20:54:53 compute-0 sharp_turing[280048]: 167 167
Nov 25 20:54:53 compute-0 systemd[1]: libpod-278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa.scope: Deactivated successfully.
Nov 25 20:54:53 compute-0 conmon[280048]: conmon 278a62dd42c9dcbfcb86 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa.scope/container/memory.events
Nov 25 20:54:53 compute-0 podman[280032]: 2025-11-25 20:54:53.619029188 +0000 UTC m=+0.195019259 container died 278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_turing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:54:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-977ad965d65ba97269df98c7526b1f1dbb503bb36699f7ce4058780d15073257-merged.mount: Deactivated successfully.
Nov 25 20:54:53 compute-0 podman[280032]: 2025-11-25 20:54:53.676295366 +0000 UTC m=+0.252285437 container remove 278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_turing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 20:54:53 compute-0 systemd[1]: libpod-conmon-278a62dd42c9dcbfcb86b8866c9c9491e6fb4307ae4a0c94f36e82643f6198aa.scope: Deactivated successfully.
Nov 25 20:54:53 compute-0 podman[280071]: 2025-11-25 20:54:53.905228837 +0000 UTC m=+0.052508810 container create 50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euclid, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:54:53 compute-0 systemd[1]: Started libpod-conmon-50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b.scope.
Nov 25 20:54:53 compute-0 podman[280071]: 2025-11-25 20:54:53.876881446 +0000 UTC m=+0.024161459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:54:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6a630a11b16d1649751258c333ac4ded971d891469d02a8c7142901ea0ad7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6a630a11b16d1649751258c333ac4ded971d891469d02a8c7142901ea0ad7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6a630a11b16d1649751258c333ac4ded971d891469d02a8c7142901ea0ad7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6a630a11b16d1649751258c333ac4ded971d891469d02a8c7142901ea0ad7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6a630a11b16d1649751258c333ac4ded971d891469d02a8c7142901ea0ad7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:54 compute-0 podman[280071]: 2025-11-25 20:54:54.038184237 +0000 UTC m=+0.185464260 container init 50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:54:54 compute-0 podman[280071]: 2025-11-25 20:54:54.049669909 +0000 UTC m=+0.196949882 container start 50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:54:54 compute-0 podman[280071]: 2025-11-25 20:54:54.05339635 +0000 UTC m=+0.200676373 container attach 50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euclid, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 20:54:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1498: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:55 compute-0 ceph-mon[75144]: pgmap v1498: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:55 compute-0 tender_euclid[280088]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:54:55 compute-0 tender_euclid[280088]: --> relative data size: 1.0
Nov 25 20:54:55 compute-0 tender_euclid[280088]: --> All data devices are unavailable
Nov 25 20:54:55 compute-0 systemd[1]: libpod-50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b.scope: Deactivated successfully.
Nov 25 20:54:55 compute-0 systemd[1]: libpod-50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b.scope: Consumed 1.294s CPU time.
Nov 25 20:54:55 compute-0 podman[280071]: 2025-11-25 20:54:55.407710742 +0000 UTC m=+1.554990735 container died 50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:54:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a6a630a11b16d1649751258c333ac4ded971d891469d02a8c7142901ea0ad7c-merged.mount: Deactivated successfully.
Nov 25 20:54:55 compute-0 podman[280071]: 2025-11-25 20:54:55.483365221 +0000 UTC m=+1.630645164 container remove 50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euclid, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 20:54:55 compute-0 systemd[1]: libpod-conmon-50a1616c3e59379a3bbe3865ffdacfe61fc76708738668a8f626a54222659f5b.scope: Deactivated successfully.
Nov 25 20:54:55 compute-0 sudo[279967]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:55 compute-0 sudo[280131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:54:55 compute-0 sudo[280131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:55 compute-0 sudo[280131]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:55 compute-0 sudo[280156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:54:55 compute-0 sudo[280156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:55 compute-0 sudo[280156]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:54:55 compute-0 sudo[280181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:54:55 compute-0 sudo[280181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:55 compute-0 sudo[280181]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:55 compute-0 sudo[280206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:54:55 compute-0 sudo[280206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:56 compute-0 nova_compute[248866]: 2025-11-25 20:54:56.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:56 compute-0 nova_compute[248866]: 2025-11-25 20:54:56.045 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1499: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:56 compute-0 podman[280271]: 2025-11-25 20:54:56.247790577 +0000 UTC m=+0.044084290 container create d6084ecb4aae041cacaae8aa7c8e8093176683afc410d5996424085d0d09eb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:54:56 compute-0 systemd[1]: Started libpod-conmon-d6084ecb4aae041cacaae8aa7c8e8093176683afc410d5996424085d0d09eb8c.scope.
Nov 25 20:54:56 compute-0 podman[280271]: 2025-11-25 20:54:56.228046359 +0000 UTC m=+0.024340062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:54:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:54:56 compute-0 podman[280271]: 2025-11-25 20:54:56.343591795 +0000 UTC m=+0.139885508 container init d6084ecb4aae041cacaae8aa7c8e8093176683afc410d5996424085d0d09eb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:54:56 compute-0 podman[280271]: 2025-11-25 20:54:56.355007986 +0000 UTC m=+0.151301709 container start d6084ecb4aae041cacaae8aa7c8e8093176683afc410d5996424085d0d09eb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:54:56 compute-0 podman[280271]: 2025-11-25 20:54:56.360253658 +0000 UTC m=+0.156547371 container attach d6084ecb4aae041cacaae8aa7c8e8093176683afc410d5996424085d0d09eb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:54:56 compute-0 thirsty_shirley[280287]: 167 167
Nov 25 20:54:56 compute-0 podman[280271]: 2025-11-25 20:54:56.362716775 +0000 UTC m=+0.159010458 container died d6084ecb4aae041cacaae8aa7c8e8093176683afc410d5996424085d0d09eb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 20:54:56 compute-0 systemd[1]: libpod-d6084ecb4aae041cacaae8aa7c8e8093176683afc410d5996424085d0d09eb8c.scope: Deactivated successfully.
Nov 25 20:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-24c0047587472349a5205ca706bec330784b7f06d554436d14268559969c6b1e-merged.mount: Deactivated successfully.
Nov 25 20:54:56 compute-0 podman[280271]: 2025-11-25 20:54:56.405951052 +0000 UTC m=+0.202244765 container remove d6084ecb4aae041cacaae8aa7c8e8093176683afc410d5996424085d0d09eb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 20:54:56 compute-0 systemd[1]: libpod-conmon-d6084ecb4aae041cacaae8aa7c8e8093176683afc410d5996424085d0d09eb8c.scope: Deactivated successfully.
Nov 25 20:54:56 compute-0 podman[280310]: 2025-11-25 20:54:56.625387404 +0000 UTC m=+0.069047550 container create d9a0a8172f056ef5f18a28f128c830c069fa34355da0d98aa7ff0efd4843ded7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ishizaka, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:54:56 compute-0 systemd[1]: Started libpod-conmon-d9a0a8172f056ef5f18a28f128c830c069fa34355da0d98aa7ff0efd4843ded7.scope.
Nov 25 20:54:56 compute-0 podman[280310]: 2025-11-25 20:54:56.597181887 +0000 UTC m=+0.040842083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:54:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb7aa290ca0e57afdc6fa2afbc178deaf2eae842f58a026c3a2bfa791b7cb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb7aa290ca0e57afdc6fa2afbc178deaf2eae842f58a026c3a2bfa791b7cb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb7aa290ca0e57afdc6fa2afbc178deaf2eae842f58a026c3a2bfa791b7cb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb7aa290ca0e57afdc6fa2afbc178deaf2eae842f58a026c3a2bfa791b7cb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:56 compute-0 podman[280310]: 2025-11-25 20:54:56.735524402 +0000 UTC m=+0.179184548 container init d9a0a8172f056ef5f18a28f128c830c069fa34355da0d98aa7ff0efd4843ded7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 20:54:56 compute-0 podman[280310]: 2025-11-25 20:54:56.752542176 +0000 UTC m=+0.196202312 container start d9a0a8172f056ef5f18a28f128c830c069fa34355da0d98aa7ff0efd4843ded7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 20:54:56 compute-0 podman[280310]: 2025-11-25 20:54:56.756479323 +0000 UTC m=+0.200139479 container attach d9a0a8172f056ef5f18a28f128c830c069fa34355da0d98aa7ff0efd4843ded7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ishizaka, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:54:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:54:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:54:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:54:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:54:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:54:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:54:57
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'images']
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:54:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:54:57 compute-0 ceph-mon[75144]: pgmap v1499: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]: {
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:     "0": [
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:         {
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "devices": [
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "/dev/loop3"
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             ],
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_name": "ceph_lv0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_size": "21470642176",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "name": "ceph_lv0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "tags": {
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.cluster_name": "ceph",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.crush_device_class": "",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.encrypted": "0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.osd_id": "0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.type": "block",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.vdo": "0"
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             },
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "type": "block",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "vg_name": "ceph_vg0"
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:         }
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:     ],
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:     "1": [
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:         {
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "devices": [
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "/dev/loop4"
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             ],
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_name": "ceph_lv1",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_size": "21470642176",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "name": "ceph_lv1",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "tags": {
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.cluster_name": "ceph",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.crush_device_class": "",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.encrypted": "0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.osd_id": "1",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.type": "block",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.vdo": "0"
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             },
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "type": "block",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "vg_name": "ceph_vg1"
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:         }
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:     ],
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:     "2": [
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:         {
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "devices": [
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "/dev/loop5"
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             ],
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_name": "ceph_lv2",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_size": "21470642176",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "name": "ceph_lv2",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "tags": {
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.cluster_name": "ceph",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.crush_device_class": "",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.encrypted": "0",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.osd_id": "2",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.type": "block",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:                 "ceph.vdo": "0"
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             },
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "type": "block",
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:             "vg_name": "ceph_vg2"
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:         }
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]:     ]
Nov 25 20:54:57 compute-0 kind_ishizaka[280327]: }
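
The JSON block emitted by kind_ishizaka above is the output of ceph-volume lvm list --format json, run by cephadm in a short-lived container: a map of OSD id to the logical volumes backing that OSD, with the ceph.* LV tags present both as one flat string (lv_tags) and parsed (tags). A minimal sketch of consuming it, assuming the output has been captured to a file (lvm_list.json is a hypothetical name):

    import json

    # Parse `ceph-volume lvm list --format json`: top-level keys are OSD ids,
    # each mapping to a list of LV records (block/db/wal) for that OSD.
    with open("lvm_list.json") as f:   # hypothetical capture of the JSON above
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} type={tags['ceph.type']} "
                  f"osd_fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")
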
Nov 25 20:54:57 compute-0 systemd[1]: libpod-d9a0a8172f056ef5f18a28f128c830c069fa34355da0d98aa7ff0efd4843ded7.scope: Deactivated successfully.
Nov 25 20:54:57 compute-0 podman[280310]: 2025-11-25 20:54:57.530029537 +0000 UTC m=+0.973689683 container died d9a0a8172f056ef5f18a28f128c830c069fa34355da0d98aa7ff0efd4843ded7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 20:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1afb7aa290ca0e57afdc6fa2afbc178deaf2eae842f58a026c3a2bfa791b7cb5-merged.mount: Deactivated successfully.
Nov 25 20:54:57 compute-0 podman[280310]: 2025-11-25 20:54:57.614943008 +0000 UTC m=+1.058603114 container remove d9a0a8172f056ef5f18a28f128c830c069fa34355da0d98aa7ff0efd4843ded7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_ishizaka, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 20:54:57 compute-0 systemd[1]: libpod-conmon-d9a0a8172f056ef5f18a28f128c830c069fa34355da0d98aa7ff0efd4843ded7.scope: Deactivated successfully.
Nov 25 20:54:57 compute-0 sudo[280206]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:57 compute-0 sudo[280349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:54:57 compute-0 sudo[280349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:57 compute-0 sudo[280349]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:57 compute-0 sudo[280374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:54:57 compute-0 sudo[280374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:57 compute-0 sudo[280374]: pam_unix(sudo:session): session closed for user root
Nov 25 20:54:57 compute-0 sudo[280399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:54:57 compute-0 sudo[280399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:57 compute-0 sudo[280399]: pam_unix(sudo:session): session closed for user root
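
Between container runs, the ceph-admin sudo sessions follow a fixed pattern: /bin/true, /bin/which python3, /bin/true again. This looks like cephadm's SSH layer verifying passwordless sudo and locating the interpreter it will ship its script to; that purpose is an assumption, the log only shows the commands. A sketch of the same two checks:

    import subprocess

    # Assumed intent of the probes above: fail fast if sudo would prompt for a
    # password, then find the python3 the remote host offers.
    subprocess.run(["sudo", "-n", "/bin/true"], check=True)
    py = subprocess.run(["sudo", "-n", "/bin/which", "python3"],
                        capture_output=True, text=True, check=True).stdout.strip()
    print("python3 at:", py)
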
Nov 25 20:54:58 compute-0 nova_compute[248866]: 2025-11-25 20:54:58.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:54:58 compute-0 nova_compute[248866]: 2025-11-25 20:54:58.045 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:54:58 compute-0 sudo[280424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:54:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1500: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:54:58 compute-0 sudo[280424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:54:58 compute-0 podman[280490]: 2025-11-25 20:54:58.547957792 +0000 UTC m=+0.053750464 container create ba38a3c9746b18fb38214df51dad247013ab65fe34a790f9b92e78602e3ff1d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:54:58 compute-0 systemd[1]: Started libpod-conmon-ba38a3c9746b18fb38214df51dad247013ab65fe34a790f9b92e78602e3ff1d4.scope.
Nov 25 20:54:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:54:58 compute-0 podman[280490]: 2025-11-25 20:54:58.527351402 +0000 UTC m=+0.033144094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:54:58 compute-0 podman[280490]: 2025-11-25 20:54:58.637412537 +0000 UTC m=+0.143205269 container init ba38a3c9746b18fb38214df51dad247013ab65fe34a790f9b92e78602e3ff1d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:54:58 compute-0 podman[280490]: 2025-11-25 20:54:58.650130344 +0000 UTC m=+0.155923026 container start ba38a3c9746b18fb38214df51dad247013ab65fe34a790f9b92e78602e3ff1d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:54:58 compute-0 loving_nightingale[280507]: 167 167
Nov 25 20:54:58 compute-0 podman[280490]: 2025-11-25 20:54:58.65844547 +0000 UTC m=+0.164238152 container attach ba38a3c9746b18fb38214df51dad247013ab65fe34a790f9b92e78602e3ff1d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 20:54:58 compute-0 systemd[1]: libpod-ba38a3c9746b18fb38214df51dad247013ab65fe34a790f9b92e78602e3ff1d4.scope: Deactivated successfully.
Nov 25 20:54:58 compute-0 podman[280490]: 2025-11-25 20:54:58.65917528 +0000 UTC m=+0.164967972 container died ba38a3c9746b18fb38214df51dad247013ab65fe34a790f9b92e78602e3ff1d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:54:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d79ca99b951fbc4abfd38404e750d799b64566c32765dfa8a62f6c3ee9623fd3-merged.mount: Deactivated successfully.
Nov 25 20:54:58 compute-0 podman[280490]: 2025-11-25 20:54:58.710585609 +0000 UTC m=+0.216378291 container remove ba38a3c9746b18fb38214df51dad247013ab65fe34a790f9b92e78602e3ff1d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:54:58 compute-0 systemd[1]: libpod-conmon-ba38a3c9746b18fb38214df51dad247013ab65fe34a790f9b92e78602e3ff1d4.scope: Deactivated successfully.
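
Each cephadm probe above runs through the same podman lifecycle: image pull, container create, init, start, attach, died, remove, with the matching libpod-… and libpod-conmon-… systemd scopes deactivated afterwards. The only stdout from loving_nightingale was "167 167", consistent with cephadm reading the ceph uid/gid baked into the image, e.g. via something like stat -c '%u %g' on a ceph-owned path (an assumption; the exact command is not in the log). A hypothetical reproduction:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Throwaway container that stats a path owned by the ceph user in the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)   # "167 167" for Ceph images, matching the log line above
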
Nov 25 20:54:58 compute-0 podman[280504]: 2025-11-25 20:54:58.787378509 +0000 UTC m=+0.173866313 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 20:54:58 compute-0 podman[280555]: 2025-11-25 20:54:58.913754749 +0000 UTC m=+0.050970488 container create 0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:54:58 compute-0 systemd[1]: Started libpod-conmon-0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0.scope.
Nov 25 20:54:58 compute-0 podman[280555]: 2025-11-25 20:54:58.887897575 +0000 UTC m=+0.025113324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:54:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ad0798f0c8b8bfd4ed6789c3ad443c8f726135c1d1943662df98257d6206b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ad0798f0c8b8bfd4ed6789c3ad443c8f726135c1d1943662df98257d6206b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ad0798f0c8b8bfd4ed6789c3ad443c8f726135c1d1943662df98257d6206b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ad0798f0c8b8bfd4ed6789c3ad443c8f726135c1d1943662df98257d6206b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:54:59 compute-0 podman[280555]: 2025-11-25 20:54:59.037872097 +0000 UTC m=+0.175087836 container init 0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_galileo, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:54:59 compute-0 podman[280555]: 2025-11-25 20:54:59.052882956 +0000 UTC m=+0.190098705 container start 0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 25 20:54:59 compute-0 podman[280555]: 2025-11-25 20:54:59.057013028 +0000 UTC m=+0.194228787 container attach 0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_galileo, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 25 20:54:59 compute-0 ceph-mon[75144]: pgmap v1500: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1501: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:00 compute-0 happy_galileo[280572]: {
Nov 25 20:55:00 compute-0 happy_galileo[280572]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "osd_id": 2,
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "type": "bluestore"
Nov 25 20:55:00 compute-0 happy_galileo[280572]:     },
Nov 25 20:55:00 compute-0 happy_galileo[280572]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "osd_id": 1,
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "type": "bluestore"
Nov 25 20:55:00 compute-0 happy_galileo[280572]:     },
Nov 25 20:55:00 compute-0 happy_galileo[280572]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "osd_id": 0,
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:55:00 compute-0 happy_galileo[280572]:         "type": "bluestore"
Nov 25 20:55:00 compute-0 happy_galileo[280572]:     }
Nov 25 20:55:00 compute-0 happy_galileo[280572]: }
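
The happy_galileo output above is ceph-volume raw list --format json for the same cluster: the three OSDs again, but keyed by osd_uuid rather than osd_id and resolved to their device-mapper paths. Cross-checking it against the lvm listing is straightforward; a sketch, with the capture file name hypothetical:

    import json

    with open("raw_list.json") as f:   # hypothetical capture of the JSON above
        raw = json.load(f)

    # Keys are OSD uuids; each record names the backing device and the osd_id.
    for osd_uuid, rec in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        assert rec["osd_uuid"] == osd_uuid
        print(f"osd.{rec['osd_id']} ({rec['type']}): {rec['device']} "
              f"cluster={rec['ceph_fsid']}")
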
Nov 25 20:55:00 compute-0 systemd[1]: libpod-0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0.scope: Deactivated successfully.
Nov 25 20:55:00 compute-0 podman[280555]: 2025-11-25 20:55:00.149064561 +0000 UTC m=+1.286280310 container died 0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:55:00 compute-0 systemd[1]: libpod-0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0.scope: Consumed 1.107s CPU time.
Nov 25 20:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9ad0798f0c8b8bfd4ed6789c3ad443c8f726135c1d1943662df98257d6206b2-merged.mount: Deactivated successfully.
Nov 25 20:55:00 compute-0 podman[280555]: 2025-11-25 20:55:00.222773277 +0000 UTC m=+1.359989026 container remove 0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_galileo, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:55:00 compute-0 systemd[1]: libpod-conmon-0fb85e0ca6f46a0604a50735d8bb98ec4783973f8695a998d164291cf9e9d6d0.scope: Deactivated successfully.
Nov 25 20:55:00 compute-0 sudo[280424]: pam_unix(sudo:session): session closed for user root
Nov 25 20:55:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:55:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:55:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:55:00 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:55:00 compute-0 sudo[280617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:55:00 compute-0 sudo[280617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:55:00 compute-0 sudo[280617]: pam_unix(sudo:session): session closed for user root
Nov 25 20:55:00 compute-0 sudo[280642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:55:00 compute-0 sudo[280642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:55:00 compute-0 sudo[280642]: pam_unix(sudo:session): session closed for user root
Nov 25 20:55:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:01 compute-0 nova_compute[248866]: 2025-11-25 20:55:01.048 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:01 compute-0 ceph-mon[75144]: pgmap v1501: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:55:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.037 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.067 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.068 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.068 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.068 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.068 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1502: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:55:02 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4237032050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:55:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
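
The pg_autoscaler figures above can be reproduced from the numbers they print: pg target is roughly capacity_ratio x bias x (OSD count x mon_target_pg_per_osd), quantized to a power of two and floored. With the '.mgr' pool's ratio of 1.4371e-05, bias 1.0, 3 OSDs and the default target of 100 PGs per OSD, that gives 0.004311..., exactly the logged value, quantizing to 1; pools at ratio 0.0 simply keep their current pg_num of 32. A sketch under those assumptions:

    import math

    # Assumptions: 3 OSDs, mon_target_pg_per_osd=100, power-of-two quantization.
    def pg_target(capacity_ratio, bias, n_osds=3, target_pg_per_osd=100):
        return capacity_ratio * bias * n_osds * target_pg_per_osd

    def quantize(target, minimum=1):
        # Nearest power of two, never below the floor.
        if target <= minimum:
            return minimum
        return 2 ** round(math.log2(target))

    t = pg_target(1.4371499967441557e-05, 1.0)
    print(t, quantize(t))   # 0.004311449990232467 -> 1, matching the '.mgr' line
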
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.508 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
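
nova-compute's resource audit shells out to ceph df --format=json (twice per pass here: once for the hypervisor view, once for the final inventory) and derives free disk from the cluster totals. The autoscaler's capacity figure of 64411926528 bytes divided by 1024^3 is 59.98828125, which is exactly the free_disk value reported a few lines below. A minimal sketch of the same query, assuming the usual stats.total_avail_bytes field in the JSON:

    import json
    import subprocess

    # Same invocation as the log line above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    free_gb = stats["total_avail_bytes"] / 1024 ** 3   # assumed field name
    print(f"free_disk={free_gb}GB")   # 64411926528 bytes -> 59.98828125
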
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.761 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.764 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5251MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.764 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.765 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.846 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.847 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:55:02 compute-0 nova_compute[248866]: 2025-11-25 20:55:02.863 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:55:03 compute-0 ceph-mon[75144]: pgmap v1502: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:03 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/4237032050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:55:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:55:03 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231413527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:55:03 compute-0 nova_compute[248866]: 2025-11-25 20:55:03.338 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:55:03 compute-0 nova_compute[248866]: 2025-11-25 20:55:03.347 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:55:03 compute-0 nova_compute[248866]: 2025-11-25 20:55:03.366 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
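
The inventory record above encodes placement's capacity rule: schedulable capacity per resource class is (total - reserved) x allocation_ratio. For this host that yields 32 VCPU (8 x 4.0), 7168 MB of RAM ((7680 - 512) x 1.0) and 53.1 DISK_GB (59 x 0.9). A quick check:

    # Placement capacity: (total - reserved) * allocation_ratio, per class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1
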
Nov 25 20:55:03 compute-0 nova_compute[248866]: 2025-11-25 20:55:03.368 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:55:03 compute-0 nova_compute[248866]: 2025-11-25 20:55:03.369 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:55:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1503: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:04 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3231413527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:55:05 compute-0 ceph-mon[75144]: pgmap v1503: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:05 compute-0 nova_compute[248866]: 2025-11-25 20:55:05.371 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:05 compute-0 nova_compute[248866]: 2025-11-25 20:55:05.371 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:55:05 compute-0 nova_compute[248866]: 2025-11-25 20:55:05.371 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:55:05 compute-0 nova_compute[248866]: 2025-11-25 20:55:05.385 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:55:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1504: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:07 compute-0 ceph-mon[75144]: pgmap v1504: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1505: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:09 compute-0 ceph-mon[75144]: pgmap v1505: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1506: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:55:11 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 6906 writes, 30K keys, 6906 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 6906 writes, 6906 syncs, 1.00 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1294 writes, 5646 keys, 1294 commit groups, 1.0 writes per commit group, ingest: 5.69 MB, 0.01 MB/s
                                           Interval WAL: 1294 writes, 1294 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.6      0.25              0.12        18    0.014       0      0       0.0       0.0
                                             L6      1/0    5.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    108.3     87.6      0.92              0.42        17    0.054     69K   9741       0.0       0.0
                                            Sum      1/0    5.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     85.0     89.5      1.17              0.54        35    0.033     69K   9741       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2    119.4    119.6      0.15              0.09         6    0.025     14K   1944       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    108.3     87.6      0.92              0.42        17    0.054     69K   9741       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     97.6      0.25              0.12        17    0.015       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.003
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.03 MB/s write, 0.10 GB read, 0.03 MB/s read, 1.2 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585aba031f0#2 capacity: 308.00 MB usage: 12.25 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000196 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1135,11.72 MB,3.80489%) FilterBlock(36,196.61 KB,0.0623381%) IndexBlock(36,342.08 KB,0.108461%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
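
The compaction table in the dump above is internally consistent: the Sum row's W-Amp of 4.3 is, roughly, total compaction writes over flushed bytes, i.e. the 0.10 GB of cumulative compaction writes against Flush(GB) 0.024 (0.10 / 0.024 is about 4.2, matching 4.3 within the table's rounding; the exact RocksDB definition is an assumption here). All stall counters are zero, so the mon's RocksDB is keeping up comfortably. The arithmetic:

    # Rough write-amplification check from the stats above (values in GB).
    compaction_write = 0.10   # "Cumulative compaction: 0.10 GB write"
    flushed = 0.024           # "Flush(GB): cumulative 0.024"
    print(round(compaction_write / flushed, 1))   # ~4.2 vs. the Sum W-Amp of 4.3
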
Nov 25 20:55:11 compute-0 ceph-mon[75144]: pgmap v1506: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1507: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:13 compute-0 ceph-mon[75144]: pgmap v1507: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1508: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:15 compute-0 ceph-mon[75144]: pgmap v1508: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1509: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:55:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2101767765' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:55:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:55:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2101767765' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:55:17 compute-0 ceph-mon[75144]: pgmap v1509: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2101767765' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:55:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/2101767765' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:55:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1510: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:19 compute-0 ceph-mon[75144]: pgmap v1510: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:20 compute-0 podman[280711]: 2025-11-25 20:55:20.002554748 +0000 UTC m=+0.097622329 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:55:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1511: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:21 compute-0 ceph-mon[75144]: pgmap v1511: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:22 compute-0 podman[280732]: 2025-11-25 20:55:22.028506149 +0000 UTC m=+0.122063141 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
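
The recurring health_status=healthy events come from podman healthchecks declared in each container's config_data: the host directory /var/lib/openstack/healthchecks/<service> is mounted read-only at /openstack and the test command is /openstack/healthcheck. The same probe can be triggered by hand with podman healthcheck run; a sketch using the container names from the events above:

    import subprocess

    # Execute each container's configured healthcheck test on demand.
    for name in ("ovn_controller", "ovn_metadata_agent", "multipathd"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")
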
Nov 25 20:55:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1512: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:23 compute-0 ceph-mon[75144]: pgmap v1512: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1513: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:25 compute-0 ceph-mon[75144]: pgmap v1513: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1514: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:55:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:55:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:55:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:55:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:55:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:55:27 compute-0 ceph-mon[75144]: pgmap v1514: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1515: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:29 compute-0 podman[280752]: 2025-11-25 20:55:29.049070124 +0000 UTC m=+0.140489202 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 20:55:29 compute-0 ceph-mon[75144]: pgmap v1515: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1516: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:31 compute-0 ceph-mon[75144]: pgmap v1516: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1517: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:33 compute-0 ceph-mon[75144]: pgmap v1517: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1518: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:35 compute-0 ceph-mon[75144]: pgmap v1518: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1519: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:37 compute-0 ceph-mon[75144]: pgmap v1519: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1520: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:38 compute-0 nova_compute[248866]: 2025-11-25 20:55:38.553 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:39 compute-0 ceph-mon[75144]: pgmap v1520: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:40 compute-0 nova_compute[248866]: 2025-11-25 20:55:40.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:40 compute-0 nova_compute[248866]: 2025-11-25 20:55:40.042 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 20:55:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1521: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:41 compute-0 ceph-mon[75144]: pgmap v1521: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1522: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:43 compute-0 ceph-mon[75144]: pgmap v1522: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1523: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:45 compute-0 ceph-mon[75144]: pgmap v1523: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
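The recurring _set_new_cache_sizes lines above are raw byte counts; converting them shows the mon is handing out round MiB allocations (arithmetic only, numbers copied from the line):

    # Numbers copied from the _set_new_cache_sizes lines above.
    sizes = {"cache_size": 1020054731, "inc_alloc": 348127232,
             "full_alloc": 348127232, "kv_alloc": 322961408}
    for name, nbytes in sizes.items():
        print(name, round(nbytes / 2**20, 1), "MiB")
    # -> cache_size 972.8, inc_alloc 332.0, full_alloc 332.0, kv_alloc 308.0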
Nov 25 20:55:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1524: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:47 compute-0 ceph-mon[75144]: pgmap v1524: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1525: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:55:48.975 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:55:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:55:48.975 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:55:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:55:48.976 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
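The Acquiring/acquired/released trio above is the standard oslo.concurrency trace: all three lines come from the one `inner` wrapper named at the end of each message. A minimal sketch of the pattern, assuming only the public lockutils.synchronized decorator and a hypothetical method body:

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # Entry logs "Acquiring lock ..." and "Lock ... acquired ::
            # waited Ns"; returning logs "Lock ... 'released' :: held Ns".
            pass  # check/respawn monitored child processes here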
Nov 25 20:55:49 compute-0 ceph-mon[75144]: pgmap v1525: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1526: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:50 compute-0 podman[280778]: 2025-11-25 20:55:50.990950968 +0000 UTC m=+0.081004228 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:55:51 compute-0 ceph-mon[75144]: pgmap v1526: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1527: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:53 compute-0 podman[280797]: 2025-11-25 20:55:53.000345299 +0000 UTC m=+0.093837995 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 20:55:53 compute-0 nova_compute[248866]: 2025-11-25 20:55:53.058 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:53 compute-0 ceph-mon[75144]: pgmap v1527: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1528: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:55 compute-0 ceph-mon[75144]: pgmap v1528: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:55:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1529: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:55:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:55:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:55:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:55:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:55:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:55:57 compute-0 nova_compute[248866]: 2025-11-25 20:55:57.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:55:57
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.meta']
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:55:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:55:57 compute-0 ceph-mon[75144]: pgmap v1529: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:58 compute-0 nova_compute[248866]: 2025-11-25 20:55:58.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:58 compute-0 nova_compute[248866]: 2025-11-25 20:55:58.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:55:58 compute-0 nova_compute[248866]: 2025-11-25 20:55:58.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
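The "skipping..." line above is nova's gate on the soft-delete reclaim task: with reclaim_instance_interval left at its default of 0, the periodic task returns immediately. A paraphrase of that guard (a sketch, not the verbatim nova source):

    import logging

    LOG = logging.getLogger(__name__)

    def _reclaim_queued_deletes(conf):
        # Soft-deleted instances are only reclaimed for real when the
        # operator sets a positive interval (in seconds).
        if conf.reclaim_instance_interval <= 0:
            LOG.debug("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ... fetch SOFT_DELETED instances older than the interval ...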
Nov 25 20:55:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1530: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:55:59 compute-0 ceph-mon[75144]: pgmap v1530: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:00 compute-0 podman[280819]: 2025-11-25 20:56:00.091113959 +0000 UTC m=+0.185459732 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 25 20:56:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1531: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:00 compute-0 sudo[280845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:56:00 compute-0 sudo[280845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:00 compute-0 sudo[280845]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:00 compute-0 sudo[280870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:56:00 compute-0 sudo[280870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:00 compute-0 sudo[280870]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:00 compute-0 sudo[280895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:56:00 compute-0 sudo[280895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:00 compute-0 sudo[280895]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:00 compute-0 sudo[280920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:56:00 compute-0 sudo[280920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:01 compute-0 sudo[280920]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:56:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:56:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:56:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:56:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:56:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:56:01 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 42d7990a-7586-4065-bfa8-7e1baffa7a9a does not exist
Nov 25 20:56:01 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 7118b78b-4595-46ae-9d8a-74f4b0c000b9 does not exist
Nov 25 20:56:01 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 5617a793-3b80-4185-87d9-d51f6f34f173 does not exist
Nov 25 20:56:01 compute-0 ceph-mon[75144]: pgmap v1531: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:56:01 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:56:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:56:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:56:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:56:01 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:56:01 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:56:01 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
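Each audit line above embeds the dispatched mon command as a JSON list after cmd=. A sketch that recovers it (regex written against these lines; entries with the payload stripped, like the config-key set line at 20:56:01, simply yield None):

    import json
    import re

    AUDIT_RE = re.compile(r"cmd=(\[.*?\]): dispatch")

    def audited_command(line):
        """Return the dispatched command list, or None if none is present."""
        m = AUDIT_RE.search(line)
        return json.loads(m.group(1)) if m else None

    # e.g. -> [{"prefix": "auth get", "entity": "client.bootstrap-osd"}]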
Nov 25 20:56:01 compute-0 sudo[280976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:56:01 compute-0 sudo[280976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:01 compute-0 sudo[280976]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:01 compute-0 sudo[281001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:56:01 compute-0 sudo[281001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:01 compute-0 sudo[281001]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:01 compute-0 sudo[281026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:56:01 compute-0 sudo[281026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:01 compute-0 sudo[281026]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:01 compute-0 sudo[281051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:56:01 compute-0 sudo[281051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.075 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.076 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.076 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.077 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.077 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1532: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:02 compute-0 podman[281136]: 2025-11-25 20:56:02.364076511 +0000 UTC m=+0.050414709 container create fd93dff5628395d97e467072e5efe9ee4cc88c8cbc583f3ccf908de5a9352a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 20:56:02 compute-0 systemd[1]: Started libpod-conmon-fd93dff5628395d97e467072e5efe9ee4cc88c8cbc583f3ccf908de5a9352a5f.scope.
Nov 25 20:56:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:56:02 compute-0 podman[281136]: 2025-11-25 20:56:02.336979306 +0000 UTC m=+0.023317564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:56:02 compute-0 podman[281136]: 2025-11-25 20:56:02.449146118 +0000 UTC m=+0.135484396 container init fd93dff5628395d97e467072e5efe9ee4cc88c8cbc583f3ccf908de5a9352a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:56:02 compute-0 podman[281136]: 2025-11-25 20:56:02.460606029 +0000 UTC m=+0.146944277 container start fd93dff5628395d97e467072e5efe9ee4cc88c8cbc583f3ccf908de5a9352a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 20:56:02 compute-0 podman[281136]: 2025-11-25 20:56:02.464255617 +0000 UTC m=+0.150593895 container attach fd93dff5628395d97e467072e5efe9ee4cc88c8cbc583f3ccf908de5a9352a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:56:02 compute-0 vigorous_sutherland[281150]: 167 167
Nov 25 20:56:02 compute-0 systemd[1]: libpod-fd93dff5628395d97e467072e5efe9ee4cc88c8cbc583f3ccf908de5a9352a5f.scope: Deactivated successfully.
Nov 25 20:56:02 compute-0 podman[281136]: 2025-11-25 20:56:02.470400905 +0000 UTC m=+0.156739143 container died fd93dff5628395d97e467072e5efe9ee4cc88c8cbc583f3ccf908de5a9352a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 20:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2af0aff01c6345947f2b45eea575002478b0b971cbe5fb84648119d21f141e93-merged.mount: Deactivated successfully.
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:56:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
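The pg_autoscaler lines above carry pool name, computed pg target, and current pg count in one fixed phrase; in this window every pool's quantized target equals its current count, so no resizing happens. A sketch to tabulate them (pattern inferred from these lines):

    import re

    POOL_RE = re.compile(
        r"Pool '([^']+)' .*? pg target (\S+) quantized to (\d+) \(current (\d+)\)"
    )

    def autoscaler_status(line):
        """Return (pool, target, quantized, current) from an autoscaler line."""
        m = POOL_RE.search(line)
        if not m:
            return None
        pool, target, quantized, current = m.groups()
        return pool, float(target), int(quantized), int(current)

    # e.g. -> (".mgr", 0.004311449990232467, 1, 1)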
Nov 25 20:56:02 compute-0 podman[281136]: 2025-11-25 20:56:02.520636138 +0000 UTC m=+0.206974376 container remove fd93dff5628395d97e467072e5efe9ee4cc88c8cbc583f3ccf908de5a9352a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:56:02 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:56:02 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1957684123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:56:02 compute-0 systemd[1]: libpod-conmon-fd93dff5628395d97e467072e5efe9ee4cc88c8cbc583f3ccf908de5a9352a5f.scope: Deactivated successfully.
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.564 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
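The Running cmd / returned pair above (repeated at 20:56:02.865) is oslo.concurrency's subprocess wrapper at work. A minimal sketch of the same call through the public processutils API:

    import json

    from oslo_concurrency import processutils

    # Same command the resource tracker logged above; execute() raises
    # ProcessExecutionError on a non-zero exit, so returning means rc == 0.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)
    # e.g. stats["stats"]["total_avail_bytes"] is the cluster's free space.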
Nov 25 20:56:02 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:56:02 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:56:02 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:56:02 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:56:02 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1957684123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:56:02 compute-0 podman[281176]: 2025-11-25 20:56:02.714587478 +0000 UTC m=+0.037321744 container create 7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:56:02 compute-0 systemd[1]: Started libpod-conmon-7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d.scope.
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.765 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.767 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5286MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.767 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.767 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:56:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5495fc15c17dad86ef8ea5cd2dc3a880639abc588d649139a71699b8f79a20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5495fc15c17dad86ef8ea5cd2dc3a880639abc588d649139a71699b8f79a20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5495fc15c17dad86ef8ea5cd2dc3a880639abc588d649139a71699b8f79a20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5495fc15c17dad86ef8ea5cd2dc3a880639abc588d649139a71699b8f79a20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5495fc15c17dad86ef8ea5cd2dc3a880639abc588d649139a71699b8f79a20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:02 compute-0 podman[281176]: 2025-11-25 20:56:02.7943165 +0000 UTC m=+0.117050796 container init 7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 20:56:02 compute-0 podman[281176]: 2025-11-25 20:56:02.698686336 +0000 UTC m=+0.021420632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:56:02 compute-0 podman[281176]: 2025-11-25 20:56:02.808451243 +0000 UTC m=+0.131185509 container start 7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:56:02 compute-0 podman[281176]: 2025-11-25 20:56:02.812514834 +0000 UTC m=+0.135249190 container attach 7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.845 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.846 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:56:02 compute-0 nova_compute[248866]: 2025-11-25 20:56:02.865 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:56:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:56:03 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146429630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:56:03 compute-0 nova_compute[248866]: 2025-11-25 20:56:03.327 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:56:03 compute-0 nova_compute[248866]: 2025-11-25 20:56:03.336 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:56:03 compute-0 nova_compute[248866]: 2025-11-25 20:56:03.360 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:56:03 compute-0 nova_compute[248866]: 2025-11-25 20:56:03.363 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:56:03 compute-0 nova_compute[248866]: 2025-11-25 20:56:03.363 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
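The inventory dict logged at 20:56:03.360 fixes the schedulable capacity of this node. Applying capacity = (total - reserved) * allocation_ratio (an assumption about placement's arithmetic; the log itself only shows the inputs) gives:

    # Figures copied from the inventory line above; the formula is assumed.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, f in inventory.items():
        print(rc, (f["total"] - f["reserved"]) * f["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1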
Nov 25 20:56:03 compute-0 nova_compute[248866]: 2025-11-25 20:56:03.364 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:03 compute-0 nova_compute[248866]: 2025-11-25 20:56:03.365 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 20:56:03 compute-0 nova_compute[248866]: 2025-11-25 20:56:03.385 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 20:56:03 compute-0 ceph-mon[75144]: pgmap v1532: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:03 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/146429630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:56:03 compute-0 elastic_darwin[281192]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:56:03 compute-0 elastic_darwin[281192]: --> relative data size: 1.0
Nov 25 20:56:03 compute-0 elastic_darwin[281192]: --> All data devices are unavailable
Nov 25 20:56:03 compute-0 systemd[1]: libpod-7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d.scope: Deactivated successfully.
Nov 25 20:56:03 compute-0 systemd[1]: libpod-7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d.scope: Consumed 1.094s CPU time.
Nov 25 20:56:03 compute-0 podman[281176]: 2025-11-25 20:56:03.954096978 +0000 UTC m=+1.276831324 container died 7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:56:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd5495fc15c17dad86ef8ea5cd2dc3a880639abc588d649139a71699b8f79a20-merged.mount: Deactivated successfully.
Nov 25 20:56:04 compute-0 podman[281176]: 2025-11-25 20:56:04.026577864 +0000 UTC m=+1.349312140 container remove 7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:56:04 compute-0 systemd[1]: libpod-conmon-7d8257da4f59e03b179d0e74810861a0cefccefad1409bb8d20856f8fa09a86d.scope: Deactivated successfully.
Nov 25 20:56:04 compute-0 sudo[281051]: pam_unix(sudo:session): session closed for user root
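cephadm drives each step above through short sudo sessions whose full command line lands in the audit record. A filter that recovers the cephadm invocations and skips the /bin/true connectivity probes (line format copied from this log):

    import re

    SUDO_CMD_RE = re.compile(r"sudo\[\d+\]: (\S+) : .*COMMAND=(.+)$")

    def cephadm_calls(lines):
        """Yield (user, command) for sudo audit lines that invoke cephadm."""
        for line in lines:
            m = SUDO_CMD_RE.search(line)
            if m and "cephadm" in m.group(2):
                yield m.groups()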
Nov 25 20:56:04 compute-0 sudo[281258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:56:04 compute-0 sudo[281258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:04 compute-0 sudo[281258]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:04 compute-0 sudo[281283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:56:04 compute-0 sudo[281283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:04 compute-0 sudo[281283]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1533: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:04 compute-0 nova_compute[248866]: 2025-11-25 20:56:04.385 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:04 compute-0 nova_compute[248866]: 2025-11-25 20:56:04.385 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:04 compute-0 sudo[281308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:56:04 compute-0 sudo[281308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:04 compute-0 sudo[281308]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:04 compute-0 sudo[281333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:56:04 compute-0 sudo[281333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:04 compute-0 podman[281397]: 2025-11-25 20:56:04.915837715 +0000 UTC m=+0.064946203 container create 358f3758e9598ae5978e6a58da0f6184296142eb953c52f668ce4994bcbbb3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mirzakhani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:56:04 compute-0 systemd[1]: Started libpod-conmon-358f3758e9598ae5978e6a58da0f6184296142eb953c52f668ce4994bcbbb3d0.scope.
Nov 25 20:56:04 compute-0 podman[281397]: 2025-11-25 20:56:04.889289904 +0000 UTC m=+0.038398442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:56:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:56:05 compute-0 podman[281397]: 2025-11-25 20:56:05.016054993 +0000 UTC m=+0.165163491 container init 358f3758e9598ae5978e6a58da0f6184296142eb953c52f668ce4994bcbbb3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:56:05 compute-0 podman[281397]: 2025-11-25 20:56:05.026825855 +0000 UTC m=+0.175934343 container start 358f3758e9598ae5978e6a58da0f6184296142eb953c52f668ce4994bcbbb3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 20:56:05 compute-0 podman[281397]: 2025-11-25 20:56:05.031187674 +0000 UTC m=+0.180296162 container attach 358f3758e9598ae5978e6a58da0f6184296142eb953c52f668ce4994bcbbb3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mirzakhani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:56:05 compute-0 clever_mirzakhani[281413]: 167 167
Nov 25 20:56:05 compute-0 systemd[1]: libpod-358f3758e9598ae5978e6a58da0f6184296142eb953c52f668ce4994bcbbb3d0.scope: Deactivated successfully.
Nov 25 20:56:05 compute-0 podman[281397]: 2025-11-25 20:56:05.034355379 +0000 UTC m=+0.183463887 container died 358f3758e9598ae5978e6a58da0f6184296142eb953c52f668ce4994bcbbb3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mirzakhani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 20:56:05 compute-0 nova_compute[248866]: 2025-11-25 20:56:05.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:05 compute-0 nova_compute[248866]: 2025-11-25 20:56:05.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:56:05 compute-0 nova_compute[248866]: 2025-11-25 20:56:05.044 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:56:05 compute-0 nova_compute[248866]: 2025-11-25 20:56:05.060 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a582dcfd6a187b43c03fec9ae8fd33f2913519218995a6ab79520bdde24e2bd6-merged.mount: Deactivated successfully.
Nov 25 20:56:05 compute-0 podman[281397]: 2025-11-25 20:56:05.086668368 +0000 UTC m=+0.235776876 container remove 358f3758e9598ae5978e6a58da0f6184296142eb953c52f668ce4994bcbbb3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mirzakhani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 20:56:05 compute-0 systemd[1]: libpod-conmon-358f3758e9598ae5978e6a58da0f6184296142eb953c52f668ce4994bcbbb3d0.scope: Deactivated successfully.
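
The lone "167 167" printed by the throwaway container clever_mirzakhani (and by sweet_yonath later) is cephadm discovering the uid/gid that owns /var/lib/ceph inside the image so it can chown bind mounts on the host; 167 is the ceph user and group in CentOS-based Ceph images. A stand-alone reproduction, assuming podman access to the same image; the stat entrypoint is an assumption about cephadm's probe, not quoted from this log:

    # Hypothetical reproduction of the uid/gid probe seen above.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"])
    uid, gid = map(int, out.decode().split())
    print(uid, gid)   # expected: 167 167, matching the container output above
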
Nov 25 20:56:05 compute-0 podman[281437]: 2025-11-25 20:56:05.338599311 +0000 UTC m=+0.071906311 container create 5d49b51024efeb331728dc9186a18fadaf9e15d5e0cc011a8216558ead120650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elgamal, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 20:56:05 compute-0 systemd[1]: Started libpod-conmon-5d49b51024efeb331728dc9186a18fadaf9e15d5e0cc011a8216558ead120650.scope.
Nov 25 20:56:05 compute-0 podman[281437]: 2025-11-25 20:56:05.311113146 +0000 UTC m=+0.044420146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:56:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c4108f5d76d774904cc799ccdbde05e10302e5f4feb3911906558cc46790bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c4108f5d76d774904cc799ccdbde05e10302e5f4feb3911906558cc46790bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c4108f5d76d774904cc799ccdbde05e10302e5f4feb3911906558cc46790bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c4108f5d76d774904cc799ccdbde05e10302e5f4feb3911906558cc46790bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
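
The kernel's "supports timestamps until 2038 (0x7fffffff)" notices above indicate an xfs filesystem formatted without the bigtime feature, so inode timestamps are capped at the 32-bit signed time_t maximum. Decoding that limit:

    # What 0x7fffffff seconds after the Unix epoch means:
    from datetime import datetime, timezone

    limit = 0x7fffffff                       # max 32-bit signed time_t
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
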
Nov 25 20:56:05 compute-0 podman[281437]: 2025-11-25 20:56:05.451538615 +0000 UTC m=+0.184845665 container init 5d49b51024efeb331728dc9186a18fadaf9e15d5e0cc011a8216558ead120650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:56:05 compute-0 podman[281437]: 2025-11-25 20:56:05.464093955 +0000 UTC m=+0.197400965 container start 5d49b51024efeb331728dc9186a18fadaf9e15d5e0cc011a8216558ead120650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 20:56:05 compute-0 podman[281437]: 2025-11-25 20:56:05.468540596 +0000 UTC m=+0.201847586 container attach 5d49b51024efeb331728dc9186a18fadaf9e15d5e0cc011a8216558ead120650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elgamal, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 20:56:05 compute-0 ceph-mon[75144]: pgmap v1533: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]: {
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:     "0": [
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:         {
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "devices": [
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "/dev/loop3"
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             ],
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_name": "ceph_lv0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_size": "21470642176",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "name": "ceph_lv0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "tags": {
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.cluster_name": "ceph",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.crush_device_class": "",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.encrypted": "0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.osd_id": "0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.type": "block",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.vdo": "0"
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             },
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "type": "block",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "vg_name": "ceph_vg0"
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:         }
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:     ],
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:     "1": [
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:         {
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "devices": [
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "/dev/loop4"
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             ],
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_name": "ceph_lv1",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_size": "21470642176",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "name": "ceph_lv1",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "tags": {
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.cluster_name": "ceph",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.crush_device_class": "",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.encrypted": "0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.osd_id": "1",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.type": "block",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.vdo": "0"
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             },
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "type": "block",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "vg_name": "ceph_vg1"
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:         }
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:     ],
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:     "2": [
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:         {
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "devices": [
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "/dev/loop5"
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             ],
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_name": "ceph_lv2",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_size": "21470642176",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "name": "ceph_lv2",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "tags": {
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.cluster_name": "ceph",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.crush_device_class": "",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.encrypted": "0",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.osd_id": "2",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.type": "block",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:                 "ceph.vdo": "0"
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             },
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "type": "block",
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:             "vg_name": "ceph_vg2"
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:         }
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]:     ]
Nov 25 20:56:06 compute-0 trusting_elgamal[281453]: }
Nov 25 20:56:06 compute-0 systemd[1]: libpod-5d49b51024efeb331728dc9186a18fadaf9e15d5e0cc011a8216558ead120650.scope: Deactivated successfully.
Nov 25 20:56:06 compute-0 podman[281437]: 2025-11-25 20:56:06.285628238 +0000 UTC m=+1.018935218 container died 5d49b51024efeb331728dc9186a18fadaf9e15d5e0cc011a8216558ead120650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elgamal, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:56:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0c4108f5d76d774904cc799ccdbde05e10302e5f4feb3911906558cc46790bd-merged.mount: Deactivated successfully.
Nov 25 20:56:06 compute-0 podman[281437]: 2025-11-25 20:56:06.344134895 +0000 UTC m=+1.077441855 container remove 5d49b51024efeb331728dc9186a18fadaf9e15d5e0cc011a8216558ead120650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elgamal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 20:56:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1534: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:06 compute-0 systemd[1]: libpod-conmon-5d49b51024efeb331728dc9186a18fadaf9e15d5e0cc011a8216558ead120650.scope: Deactivated successfully.
Nov 25 20:56:06 compute-0 sudo[281333]: pam_unix(sudo:session): session closed for user root
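
The JSON emitted by trusting_elgamal above is the payload of the ceph-volume "lvm list --format json" call started at 20:56:04: a map of OSD id to logical volumes, with LVM tags binding each LV to the cluster fsid (712dd110-...) and an OSD fsid. A post-processing sketch, not part of cephadm, assuming the JSON above was saved to lvm_list.json:

    # Illustrative only: flatten the lvm list output to one line per OSD.
    import json

    with open("lvm_list.json") as f:         # assumed local copy of the JSON
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")
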
Nov 25 20:56:06 compute-0 sudo[281472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:56:06 compute-0 sudo[281472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:06 compute-0 sudo[281472]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:06 compute-0 sudo[281497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:56:06 compute-0 sudo[281497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:06 compute-0 sudo[281497]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:06 compute-0 sudo[281522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:56:06 compute-0 sudo[281522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:06 compute-0 sudo[281522]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:06 compute-0 sudo[281547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:56:06 compute-0 sudo[281547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:07 compute-0 podman[281612]: 2025-11-25 20:56:07.148099852 +0000 UTC m=+0.062067934 container create 712f1b1bbeaebc39af09a6c01910cfccf6dbbf3ce39d2de85f8b297b8b90bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yonath, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:56:07 compute-0 systemd[1]: Started libpod-conmon-712f1b1bbeaebc39af09a6c01910cfccf6dbbf3ce39d2de85f8b297b8b90bf32.scope.
Nov 25 20:56:07 compute-0 podman[281612]: 2025-11-25 20:56:07.119017284 +0000 UTC m=+0.032985416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:56:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:56:07 compute-0 podman[281612]: 2025-11-25 20:56:07.242246296 +0000 UTC m=+0.156214398 container init 712f1b1bbeaebc39af09a6c01910cfccf6dbbf3ce39d2de85f8b297b8b90bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yonath, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 20:56:07 compute-0 podman[281612]: 2025-11-25 20:56:07.252242168 +0000 UTC m=+0.166210240 container start 712f1b1bbeaebc39af09a6c01910cfccf6dbbf3ce39d2de85f8b297b8b90bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 20:56:07 compute-0 podman[281612]: 2025-11-25 20:56:07.256516273 +0000 UTC m=+0.170484405 container attach 712f1b1bbeaebc39af09a6c01910cfccf6dbbf3ce39d2de85f8b297b8b90bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:56:07 compute-0 sweet_yonath[281628]: 167 167
Nov 25 20:56:07 compute-0 systemd[1]: libpod-712f1b1bbeaebc39af09a6c01910cfccf6dbbf3ce39d2de85f8b297b8b90bf32.scope: Deactivated successfully.
Nov 25 20:56:07 compute-0 podman[281612]: 2025-11-25 20:56:07.25973257 +0000 UTC m=+0.173700652 container died 712f1b1bbeaebc39af09a6c01910cfccf6dbbf3ce39d2de85f8b297b8b90bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 20:56:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7564e3e9daeb8b9133af1ca9ade1b5921b2c60f7666710ef07735ff33854e914-merged.mount: Deactivated successfully.
Nov 25 20:56:07 compute-0 podman[281612]: 2025-11-25 20:56:07.310025544 +0000 UTC m=+0.223993616 container remove 712f1b1bbeaebc39af09a6c01910cfccf6dbbf3ce39d2de85f8b297b8b90bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:56:07 compute-0 systemd[1]: libpod-conmon-712f1b1bbeaebc39af09a6c01910cfccf6dbbf3ce39d2de85f8b297b8b90bf32.scope: Deactivated successfully.
Nov 25 20:56:07 compute-0 podman[281652]: 2025-11-25 20:56:07.564140927 +0000 UTC m=+0.067179593 container create b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shockley, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:56:07 compute-0 systemd[1]: Started libpod-conmon-b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494.scope.
Nov 25 20:56:07 compute-0 ceph-mon[75144]: pgmap v1534: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:07 compute-0 podman[281652]: 2025-11-25 20:56:07.535282545 +0000 UTC m=+0.038321261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:56:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0254a87acb950063bc945b63309feeac0a05ea36748722c008b9c20c9e1a2dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0254a87acb950063bc945b63309feeac0a05ea36748722c008b9c20c9e1a2dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0254a87acb950063bc945b63309feeac0a05ea36748722c008b9c20c9e1a2dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0254a87acb950063bc945b63309feeac0a05ea36748722c008b9c20c9e1a2dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:56:07 compute-0 podman[281652]: 2025-11-25 20:56:07.681887301 +0000 UTC m=+0.184926037 container init b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 20:56:07 compute-0 podman[281652]: 2025-11-25 20:56:07.697787683 +0000 UTC m=+0.200826369 container start b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 20:56:07 compute-0 podman[281652]: 2025-11-25 20:56:07.702554701 +0000 UTC m=+0.205593367 container attach b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 20:56:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1535: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:08 compute-0 friendly_shockley[281670]: {
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "osd_id": 2,
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "type": "bluestore"
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:     },
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "osd_id": 1,
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "type": "bluestore"
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:     },
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "osd_id": 0,
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:         "type": "bluestore"
Nov 25 20:56:08 compute-0 friendly_shockley[281670]:     }
Nov 25 20:56:08 compute-0 friendly_shockley[281670]: }
Nov 25 20:56:08 compute-0 systemd[1]: libpod-b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494.scope: Deactivated successfully.
Nov 25 20:56:08 compute-0 systemd[1]: libpod-b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494.scope: Consumed 1.150s CPU time.
Nov 25 20:56:08 compute-0 podman[281703]: 2025-11-25 20:56:08.897464093 +0000 UTC m=+0.042398891 container died b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 20:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0254a87acb950063bc945b63309feeac0a05ea36748722c008b9c20c9e1a2dc-merged.mount: Deactivated successfully.
Nov 25 20:56:08 compute-0 podman[281703]: 2025-11-25 20:56:08.961939611 +0000 UTC m=+0.106874419 container remove b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shockley, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:56:08 compute-0 systemd[1]: libpod-conmon-b05cc0b1a82e822dd705c02215f3331cd2feef2930b70fa64f8f7b87d34b4494.scope: Deactivated successfully.
Nov 25 20:56:09 compute-0 sudo[281547]: pam_unix(sudo:session): session closed for user root
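
Where "lvm list" keys its output by OSD id, the "raw list" payload printed by friendly_shockley above is keyed by OSD uuid and reports each bluestore device through its device-mapper path (/dev/mapper/ceph_vgN-ceph_lvN). A small consistency check, again hypothetical and assuming both payloads were saved locally, that the two views agree:

    # Illustrative cross-check of the two ceph-volume listings above.
    import json

    lvm = json.load(open("lvm_list.json"))   # keyed by osd_id (str)
    raw = json.load(open("raw_list.json"))   # keyed by osd_uuid

    for osd_id, lvs in lvm.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        entry = raw.get(fsid)
        assert entry and entry["osd_id"] == int(osd_id), f"osd.{osd_id} mismatch"
        print(f"osd.{osd_id} ({entry['type']}) -> {entry['device']}")
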
Nov 25 20:56:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:56:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:56:09 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:56:09 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
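
With the device scan finished, the mgr's cephadm module persists the host's inventory in the monitor's config-key store, which is what the two audited "config-key set" commands above (keys under mgr/cephadm/host.compute-0) record. The cached value can be read back with the ceph CLI; a sketch assuming an admin keyring on this host and that the stored value is JSON, which is how cephadm normally serialises it:

    # Hypothetical read-back of the inventory cephadm just cached.
    import json, subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    raw = subprocess.check_output(["ceph", "config-key", "get", key])
    devices = json.loads(raw)                # assumed JSON payload
    print(f"{key}: {len(raw)} bytes cached")
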
Nov 25 20:56:09 compute-0 sudo[281718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:56:09 compute-0 sudo[281718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:09 compute-0 sudo[281718]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:09 compute-0 sudo[281743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:56:09 compute-0 sudo[281743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:56:09 compute-0 sudo[281743]: pam_unix(sudo:session): session closed for user root
Nov 25 20:56:09 compute-0 nova_compute[248866]: 2025-11-25 20:56:09.529 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:09 compute-0 ceph-mon[75144]: pgmap v1535: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:56:09 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:56:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1536: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:11 compute-0 ceph-mon[75144]: pgmap v1536: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1537: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:13 compute-0 ceph-mon[75144]: pgmap v1537: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1538: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:15 compute-0 ceph-mon[75144]: pgmap v1538: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:16 compute-0 nova_compute[248866]: 2025-11-25 20:56:16.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1539: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:56:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/363162233' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:56:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:56:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/363162233' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:56:17 compute-0 ceph-mon[75144]: pgmap v1539: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/363162233' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:56:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/363162233' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
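
The audited "df" and "osd pool get-quota" commands from client.openstack (192.168.122.10) are a periodic capacity poll against the volumes pool, as OpenStack's Ceph drivers do when reporting backend capacity. The same two mon commands can be reproduced from the CLI; a sketch assuming a reachable cluster, a client keyring, and the JSON field names used by current Ceph releases:

    # Reproducing the capacity poll seen above (same JSON mon commands).
    import json, subprocess

    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format", "json"]))
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"]))

    print(df["stats"]["total_avail_bytes"], "bytes available cluster-wide")
    print(quota["quota_max_bytes"], "byte quota on pool 'volumes' (0 = none)")
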
Nov 25 20:56:18 compute-0 nova_compute[248866]: 2025-11-25 20:56:18.054 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1540: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.680508) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104178680531, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 2318114, "memory_usage": 2357240, "flush_reason": "Manual Compaction"}
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104178692771, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2258938, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29291, "largest_seqno": 31343, "table_properties": {"data_size": 2249499, "index_size": 5997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18696, "raw_average_key_size": 20, "raw_value_size": 2230779, "raw_average_value_size": 2398, "num_data_blocks": 270, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764103946, "oldest_key_time": 1764103946, "file_creation_time": 1764104178, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 12411 microseconds, and 6345 cpu microseconds.
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.692909) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2258938 bytes OK
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.692939) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.694599) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.694627) EVENT_LOG_v1 {"time_micros": 1764104178694617, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.694652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2309516, prev total WAL file size 2309516, number of live WAL files 2.
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.695870) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2205KB)], [68(5378KB)]
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104178695910, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 7766555, "oldest_snapshot_seqno": -1}
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 4789 keys, 6595002 bytes, temperature: kUnknown
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104178753062, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 6595002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6561605, "index_size": 20295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12037, "raw_key_size": 117533, "raw_average_key_size": 24, "raw_value_size": 6474209, "raw_average_value_size": 1351, "num_data_blocks": 850, "num_entries": 4789, "num_filter_entries": 4789, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764104178, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.753442) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 6595002 bytes
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.755202) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.6 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 5.3 +0.0 blob) out(6.3 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 5303, records dropped: 514 output_compression: NoCompression
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.755238) EVENT_LOG_v1 {"time_micros": 1764104178755222, "job": 38, "event": "compaction_finished", "compaction_time_micros": 57263, "compaction_time_cpu_micros": 33821, "output_level": 6, "num_output_files": 1, "total_output_size": 6595002, "num_input_records": 5303, "num_output_records": 4789, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104178756232, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104178758447, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.695786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.758529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.758535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.758539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.758542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:18 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:18.758544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:19 compute-0 ceph-mon[75144]: pgmap v1540: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1541: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:21 compute-0 ceph-mon[75144]: pgmap v1541: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:21 compute-0 podman[281768]: 2025-11-25 20:56:21.995103233 +0000 UTC m=+0.078997803 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 20:56:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1542: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:23 compute-0 ceph-mon[75144]: pgmap v1542: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:24 compute-0 podman[281788]: 2025-11-25 20:56:24.031589011 +0000 UTC m=+0.120968192 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 20:56:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1543: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:25 compute-0 ceph-mon[75144]: pgmap v1543: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1544: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:56:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:56:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:56:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:56:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:56:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:56:27 compute-0 ceph-mon[75144]: pgmap v1544: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1545: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:29 compute-0 ceph-mon[75144]: pgmap v1545: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1546: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:31 compute-0 podman[281809]: 2025-11-25 20:56:31.04304672 +0000 UTC m=+0.132913316 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 20:56:31 compute-0 ceph-mon[75144]: pgmap v1546: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1547: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:33 compute-0 ceph-mon[75144]: pgmap v1547: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1548: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:35 compute-0 ceph-mon[75144]: pgmap v1548: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.809365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104195809610, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 374, "num_deletes": 250, "total_data_size": 159202, "memory_usage": 165568, "flush_reason": "Manual Compaction"}
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104195814467, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 140084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31344, "largest_seqno": 31717, "table_properties": {"data_size": 137874, "index_size": 374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6043, "raw_average_key_size": 20, "raw_value_size": 133476, "raw_average_value_size": 446, "num_data_blocks": 17, "num_entries": 299, "num_filter_entries": 299, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764104179, "oldest_key_time": 1764104179, "file_creation_time": 1764104195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 5141 microseconds, and 2380 cpu microseconds.
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.814519) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 140084 bytes OK
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.814541) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.816566) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.816590) EVENT_LOG_v1 {"time_micros": 1764104195816582, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.816613) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 156766, prev total WAL file size 156766, number of live WAL files 2.
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.817197) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323531' seq:72057594037927935, type:22 .. '6D6772737461740031353032' seq:0, type:0; will stop at (end)
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(136KB)], [71(6440KB)]
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104195817252, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 6735086, "oldest_snapshot_seqno": -1}
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 4583 keys, 4680735 bytes, temperature: kUnknown
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104195855138, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 4680735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4653429, "index_size": 14717, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11525, "raw_key_size": 113431, "raw_average_key_size": 24, "raw_value_size": 4574275, "raw_average_value_size": 998, "num_data_blocks": 617, "num_entries": 4583, "num_filter_entries": 4583, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764104195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.855403) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 4680735 bytes
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.856850) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.4 rd, 123.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 6.3 +0.0 blob) out(4.5 +0.0 blob), read-write-amplify(81.5) write-amplify(33.4) OK, records in: 5088, records dropped: 505 output_compression: NoCompression
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.856880) EVENT_LOG_v1 {"time_micros": 1764104195856867, "job": 40, "event": "compaction_finished", "compaction_time_micros": 37956, "compaction_time_cpu_micros": 27093, "output_level": 6, "num_output_files": 1, "total_output_size": 4680735, "num_input_records": 5088, "num_output_records": 4583, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104195857975, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104195860716, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.817115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.860999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.861006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.861009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.861012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:35 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:56:35.861014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:56:36 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1549: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:37 compute-0 ceph-mon[75144]: pgmap v1549: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:38 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1550: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:39 compute-0 ceph-mon[75144]: pgmap v1550: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:40 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1551: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:40 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:41 compute-0 ceph-mon[75144]: pgmap v1551: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:42 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1552: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:43 compute-0 ceph-mon[75144]: pgmap v1552: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:44 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1553: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:45 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:45 compute-0 ceph-mon[75144]: pgmap v1553: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:46 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1554: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:47 compute-0 ceph-mon[75144]: pgmap v1554: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:48 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1555: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:56:48.975 158053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:56:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:56:48.976 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:56:48 compute-0 ovn_metadata_agent[158048]: 2025-11-25 20:56:48.976 158053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:56:49 compute-0 ceph-mon[75144]: pgmap v1555: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:50 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1556: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:50 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:51 compute-0 ceph-mon[75144]: pgmap v1556: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:52 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1557: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:52 compute-0 podman[281835]: 2025-11-25 20:56:52.999328395 +0000 UTC m=+0.088826821 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 20:56:53 compute-0 ceph-mon[75144]: pgmap v1557: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:54 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1558: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:54 compute-0 podman[281855]: 2025-11-25 20:56:54.997406721 +0000 UTC m=+0.084957225 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:56:55 compute-0 nova_compute[248866]: 2025-11-25 20:56:55.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:55 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:56:55 compute-0 ceph-mon[75144]: pgmap v1558: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:56:56 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:56:56 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1559: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:56:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:56:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:56:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:56:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:56:56 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Optimize plan auto_2025-11-25_20:56:57
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [balancer INFO root] do_upmap
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [balancer INFO root] pools ['backups', '.mgr', 'images', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [balancer INFO root] prepared 0/10 changes
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:56:57 compute-0 ceph-mgr[75443]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 20:56:57 compute-0 ceph-mon[75144]: pgmap v1559: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:58 compute-0 nova_compute[248866]: 2025-11-25 20:56:58.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:58 compute-0 nova_compute[248866]: 2025-11-25 20:56:58.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:58 compute-0 nova_compute[248866]: 2025-11-25 20:56:58.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 20:56:58 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1560: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:56:59 compute-0 nova_compute[248866]: 2025-11-25 20:56:59.043 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:56:59 compute-0 ceph-mon[75144]: pgmap v1560: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:00 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1561: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:00 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:57:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:01 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:57:01 compute-0 ceph-mon[75144]: pgmap v1561: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:02 compute-0 podman[281875]: 2025-11-25 20:57:02.040048317 +0000 UTC m=+0.127952952 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1562: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:02 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.041 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.071 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.072 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.073 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.073 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.074 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:57:03 compute-0 sshd-session[281903]: Accepted publickey for zuul from 192.168.122.10 port 58782 ssh2: ECDSA SHA256:1vkA12xWndKI+ZPO5GiwOFoA6r5oma6LWzXAaMRRAro
Nov 25 20:57:03 compute-0 systemd-logind[789]: New session 52 of user zuul.
Nov 25 20:57:03 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 25 20:57:03 compute-0 sshd-session[281903]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 20:57:03 compute-0 sudo[281926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 20:57:03 compute-0 sudo[281926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 20:57:03 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:57:03 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272329591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.537 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.765 248870 WARNING nova.virt.libvirt.driver [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.766 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5292MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.766 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 20:57:03 compute-0 nova_compute[248866]: 2025-11-25 20:57:03.766 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 20:57:03 compute-0 ceph-mon[75144]: pgmap v1562: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:03 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2272329591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.103 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.104 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.203 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing inventories for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.286 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating ProviderTree inventory for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.286 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Updating inventory in ProviderTree for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.300 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing aggregate associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.350 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Refreshing trait associations for resource provider 26ab8f11-6940-49dd-985d-e4f9e55b992f, traits: HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 20:57:04 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1563: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:04 compute-0 ceph-mgr[75443]: [devicehealth INFO root] Check health
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.382 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 20:57:04 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 20:57:04 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578111677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.859 248870 DEBUG oslo_concurrency.processutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.864 248870 DEBUG nova.compute.provider_tree [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed in ProviderTree for provider: 26ab8f11-6940-49dd-985d-e4f9e55b992f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.879 248870 DEBUG nova.scheduler.client.report [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Inventory has not changed for provider 26ab8f11-6940-49dd-985d-e4f9e55b992f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.881 248870 DEBUG nova.compute.resource_tracker [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 20:57:04 compute-0 nova_compute[248866]: 2025-11-25 20:57:04.881 248870 DEBUG oslo_concurrency.lockutils [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 20:57:04 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/578111677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 20:57:05 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:57:05 compute-0 nova_compute[248866]: 2025-11-25 20:57:05.877 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:57:05 compute-0 nova_compute[248866]: 2025-11-25 20:57:05.878 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:57:05 compute-0 ceph-mon[75144]: pgmap v1563: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:06 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:06 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:57:06 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1564: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:06 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14558 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:07 compute-0 nova_compute[248866]: 2025-11-25 20:57:07.042 248870 DEBUG oslo_service.periodic_task [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 20:57:07 compute-0 nova_compute[248866]: 2025-11-25 20:57:07.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 20:57:07 compute-0 nova_compute[248866]: 2025-11-25 20:57:07.043 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 20:57:07 compute-0 nova_compute[248866]: 2025-11-25 20:57:07.081 248870 DEBUG nova.compute.manager [None req-1c5b07f1-89a4-4cb7-b885-b6e3801b911d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 20:57:07 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 20:57:07 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950280462' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 20:57:07 compute-0 ceph-mon[75144]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:07 compute-0 ceph-mon[75144]: pgmap v1564: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:07 compute-0 ceph-mon[75144]: from='client.14558 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:07 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2950280462' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 20:57:08 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1565: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:09 compute-0 sudo[282190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:57:09 compute-0 sudo[282190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:09 compute-0 sudo[282190]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:09 compute-0 sudo[282215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:57:09 compute-0 sudo[282215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:09 compute-0 sudo[282215]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:09 compute-0 sudo[282241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:57:09 compute-0 sudo[282241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:09 compute-0 sudo[282241]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:09 compute-0 sudo[282269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 20:57:09 compute-0 sudo[282269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:09 compute-0 ceph-mon[75144]: pgmap v1565: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:10 compute-0 sudo[282269]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:57:10 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:57:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 20:57:10 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:57:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 20:57:10 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:57:10 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 1256fbfb-593e-470a-8795-cd48d3da67b0 does not exist
Nov 25 20:57:10 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev f88c493a-7404-428f-bcc6-e9226f1eb924 does not exist
Nov 25 20:57:10 compute-0 ceph-mgr[75443]: [progress WARNING root] complete: ev 79332440-b6ff-480a-ae44-97fa4eff671c does not exist
Nov 25 20:57:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 20:57:10 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:57:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 20:57:10 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:57:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:57:10 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:57:10 compute-0 sudo[282335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:57:10 compute-0 sudo[282335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:10 compute-0 sudo[282335]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:10 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1566: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:10 compute-0 sudo[282362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:57:10 compute-0 sudo[282362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:10 compute-0 sudo[282362]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:10 compute-0 sudo[282388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:57:10 compute-0 sudo[282388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:10 compute-0 sudo[282388]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:10 compute-0 sudo[282413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 20:57:10 compute-0 sudo[282413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:10 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:57:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:57:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 20:57:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:57:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 20:57:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 20:57:10 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:57:11 compute-0 podman[282477]: 2025-11-25 20:57:11.108742937 +0000 UTC m=+0.059625959 container create 38c830ea9a262a29e9e05ae70bff3fe085ab59b803a2b3c8e6aaf19103e1944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jackson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:57:11 compute-0 systemd[1]: Started libpod-conmon-38c830ea9a262a29e9e05ae70bff3fe085ab59b803a2b3c8e6aaf19103e1944c.scope.
Nov 25 20:57:11 compute-0 podman[282477]: 2025-11-25 20:57:11.080376357 +0000 UTC m=+0.031259439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:57:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:57:11 compute-0 podman[282477]: 2025-11-25 20:57:11.194056231 +0000 UTC m=+0.144939283 container init 38c830ea9a262a29e9e05ae70bff3fe085ab59b803a2b3c8e6aaf19103e1944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jackson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 20:57:11 compute-0 podman[282477]: 2025-11-25 20:57:11.20475223 +0000 UTC m=+0.155635232 container start 38c830ea9a262a29e9e05ae70bff3fe085ab59b803a2b3c8e6aaf19103e1944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 20:57:11 compute-0 podman[282477]: 2025-11-25 20:57:11.208548934 +0000 UTC m=+0.159431956 container attach 38c830ea9a262a29e9e05ae70bff3fe085ab59b803a2b3c8e6aaf19103e1944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 20:57:11 compute-0 priceless_jackson[282494]: 167 167
Nov 25 20:57:11 compute-0 systemd[1]: libpod-38c830ea9a262a29e9e05ae70bff3fe085ab59b803a2b3c8e6aaf19103e1944c.scope: Deactivated successfully.
Nov 25 20:57:11 compute-0 podman[282477]: 2025-11-25 20:57:11.214613958 +0000 UTC m=+0.165496990 container died 38c830ea9a262a29e9e05ae70bff3fe085ab59b803a2b3c8e6aaf19103e1944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 20:57:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b265a341b1776c38cbc73a975d8668a07d77f71d95b793f37b17ed0808c91c87-merged.mount: Deactivated successfully.
Nov 25 20:57:11 compute-0 podman[282477]: 2025-11-25 20:57:11.257525052 +0000 UTC m=+0.208408054 container remove 38c830ea9a262a29e9e05ae70bff3fe085ab59b803a2b3c8e6aaf19103e1944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:57:11 compute-0 systemd[1]: libpod-conmon-38c830ea9a262a29e9e05ae70bff3fe085ab59b803a2b3c8e6aaf19103e1944c.scope: Deactivated successfully.
Nov 25 20:57:11 compute-0 podman[282521]: 2025-11-25 20:57:11.475522045 +0000 UTC m=+0.078331186 container create 63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_proskuriakova, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 20:57:11 compute-0 systemd[1]: Started libpod-conmon-63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e.scope.
Nov 25 20:57:11 compute-0 podman[282521]: 2025-11-25 20:57:11.443692642 +0000 UTC m=+0.046501823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:57:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c924d376e1218dc9cab24fe9a81ed38930486c0b97c5b4e5783560cd07eab5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c924d376e1218dc9cab24fe9a81ed38930486c0b97c5b4e5783560cd07eab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c924d376e1218dc9cab24fe9a81ed38930486c0b97c5b4e5783560cd07eab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c924d376e1218dc9cab24fe9a81ed38930486c0b97c5b4e5783560cd07eab5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c924d376e1218dc9cab24fe9a81ed38930486c0b97c5b4e5783560cd07eab5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:11 compute-0 podman[282521]: 2025-11-25 20:57:11.599381624 +0000 UTC m=+0.202190755 container init 63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:57:11 compute-0 podman[282521]: 2025-11-25 20:57:11.613081896 +0000 UTC m=+0.215891037 container start 63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:57:11 compute-0 podman[282521]: 2025-11-25 20:57:11.623939791 +0000 UTC m=+0.226748932 container attach 63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 20:57:11 compute-0 ceph-mon[75144]: pgmap v1566: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:12 compute-0 ovs-vsctl[282567]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 20:57:12 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1567: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:12 compute-0 dreamy_proskuriakova[282536]: --> passed data devices: 0 physical, 3 LVM
Nov 25 20:57:12 compute-0 dreamy_proskuriakova[282536]: --> relative data size: 1.0
Nov 25 20:57:12 compute-0 dreamy_proskuriakova[282536]: --> All data devices are unavailable
Nov 25 20:57:12 compute-0 systemd[1]: libpod-63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e.scope: Deactivated successfully.
Nov 25 20:57:12 compute-0 podman[282521]: 2025-11-25 20:57:12.834721112 +0000 UTC m=+1.437530213 container died 63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 20:57:12 compute-0 systemd[1]: libpod-63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e.scope: Consumed 1.130s CPU time.
Nov 25 20:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3c924d376e1218dc9cab24fe9a81ed38930486c0b97c5b4e5783560cd07eab5-merged.mount: Deactivated successfully.
Nov 25 20:57:12 compute-0 podman[282521]: 2025-11-25 20:57:12.896906548 +0000 UTC m=+1.499715669 container remove 63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:57:12 compute-0 systemd[1]: libpod-conmon-63feeb04d201400841f4371036c6cc63b6203e81bf6ce1441782876c8ffc8d8e.scope: Deactivated successfully.
Nov 25 20:57:12 compute-0 sudo[282413]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:13 compute-0 sudo[282639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:57:13 compute-0 sudo[282639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:13 compute-0 sudo[282639]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:13 compute-0 sudo[282670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:57:13 compute-0 sudo[282670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:13 compute-0 sudo[282670]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:13 compute-0 sudo[282698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:57:13 compute-0 sudo[282698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:13 compute-0 sudo[282698]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:13 compute-0 sudo[282739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- lvm list --format json
Nov 25 20:57:13 compute-0 sudo[282739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:13 compute-0 virtqemud[248779]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 20:57:13 compute-0 virtqemud[248779]: hostname: compute-0
Nov 25 20:57:13 compute-0 virtqemud[248779]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 20:57:13 compute-0 virtqemud[248779]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 20:57:13 compute-0 virtqemud[248779]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 20:57:13 compute-0 podman[282884]: 2025-11-25 20:57:13.781748419 +0000 UTC m=+0.069658911 container create 4515c57e3ce85840b4312fc74b22add263c0e207d7c937e60d27debb9812e326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:57:13 compute-0 systemd[1]: Started libpod-conmon-4515c57e3ce85840b4312fc74b22add263c0e207d7c937e60d27debb9812e326.scope.
Nov 25 20:57:13 compute-0 podman[282884]: 2025-11-25 20:57:13.753831312 +0000 UTC m=+0.041741824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:57:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:57:13 compute-0 podman[282884]: 2025-11-25 20:57:13.907737926 +0000 UTC m=+0.195648488 container init 4515c57e3ce85840b4312fc74b22add263c0e207d7c937e60d27debb9812e326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:57:13 compute-0 podman[282884]: 2025-11-25 20:57:13.919921958 +0000 UTC m=+0.207832430 container start 4515c57e3ce85840b4312fc74b22add263c0e207d7c937e60d27debb9812e326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:57:13 compute-0 podman[282884]: 2025-11-25 20:57:13.924219634 +0000 UTC m=+0.212130166 container attach 4515c57e3ce85840b4312fc74b22add263c0e207d7c937e60d27debb9812e326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:57:13 compute-0 elastic_gates[282921]: 167 167
Nov 25 20:57:13 compute-0 systemd[1]: libpod-4515c57e3ce85840b4312fc74b22add263c0e207d7c937e60d27debb9812e326.scope: Deactivated successfully.
Nov 25 20:57:13 compute-0 podman[282884]: 2025-11-25 20:57:13.931753208 +0000 UTC m=+0.219663770 container died 4515c57e3ce85840b4312fc74b22add263c0e207d7c937e60d27debb9812e326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fc654a16a842e1d36ed16ddba31b9c114990325984dee8adfdda7c517dea6fc-merged.mount: Deactivated successfully.
Nov 25 20:57:13 compute-0 ceph-mon[75144]: pgmap v1567: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:13 compute-0 podman[282884]: 2025-11-25 20:57:13.99817864 +0000 UTC m=+0.286089112 container remove 4515c57e3ce85840b4312fc74b22add263c0e207d7c937e60d27debb9812e326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:57:14 compute-0 systemd[1]: libpod-conmon-4515c57e3ce85840b4312fc74b22add263c0e207d7c937e60d27debb9812e326.scope: Deactivated successfully.
Nov 25 20:57:14 compute-0 podman[283020]: 2025-11-25 20:57:14.204747493 +0000 UTC m=+0.056575446 container create f2db1db6d1761de597424745ebcf728c0a5b06c2ebf424fbd7ab845ea6ac41d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 20:57:14 compute-0 systemd[1]: Started libpod-conmon-f2db1db6d1761de597424745ebcf728c0a5b06c2ebf424fbd7ab845ea6ac41d7.scope.
Nov 25 20:57:14 compute-0 podman[283020]: 2025-11-25 20:57:14.176677391 +0000 UTC m=+0.028505364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:57:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46dd4a1893e78cb14255c77e0bfd47cfff687f472c5d14140e164b79d79a95ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46dd4a1893e78cb14255c77e0bfd47cfff687f472c5d14140e164b79d79a95ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46dd4a1893e78cb14255c77e0bfd47cfff687f472c5d14140e164b79d79a95ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46dd4a1893e78cb14255c77e0bfd47cfff687f472c5d14140e164b79d79a95ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:14 compute-0 podman[283020]: 2025-11-25 20:57:14.305642789 +0000 UTC m=+0.157470792 container init f2db1db6d1761de597424745ebcf728c0a5b06c2ebf424fbd7ab845ea6ac41d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 20:57:14 compute-0 podman[283020]: 2025-11-25 20:57:14.313258486 +0000 UTC m=+0.165086439 container start f2db1db6d1761de597424745ebcf728c0a5b06c2ebf424fbd7ab845ea6ac41d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_keldysh, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 20:57:14 compute-0 podman[283020]: 2025-11-25 20:57:14.316264658 +0000 UTC m=+0.168092631 container attach f2db1db6d1761de597424745ebcf728c0a5b06c2ebf424fbd7ab845ea6ac41d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 20:57:14 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1568: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:14 compute-0 lvm[283120]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 20:57:14 compute-0 lvm[283120]: VG ceph_vg1 finished
Nov 25 20:57:14 compute-0 lvm[283118]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 20:57:14 compute-0 lvm[283118]: VG ceph_vg0 finished
Nov 25 20:57:14 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14562 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:14 compute-0 lvm[283146]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 20:57:14 compute-0 lvm[283146]: VG ceph_vg2 finished
Nov 25 20:57:14 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14566 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:14 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 20:57:14 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3053828073' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 20:57:14 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3053828073' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 20:57:15 compute-0 great_keldysh[283084]: {
Nov 25 20:57:15 compute-0 great_keldysh[283084]:     "0": [
Nov 25 20:57:15 compute-0 great_keldysh[283084]:         {
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "devices": [
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "/dev/loop3"
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             ],
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_name": "ceph_lv0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_size": "21470642176",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a2211a-2b5d-4914-9a66-9743102e8fa4,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "name": "ceph_lv0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "tags": {
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.block_uuid": "q0RWfM-RZQI-hj56-3om1-R62y-78wd-7VaQOm",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.cluster_name": "ceph",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.crush_device_class": "",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.encrypted": "0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.osd_fsid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.osd_id": "0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.type": "block",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.vdo": "0"
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             },
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "type": "block",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "vg_name": "ceph_vg0"
Nov 25 20:57:15 compute-0 great_keldysh[283084]:         }
Nov 25 20:57:15 compute-0 great_keldysh[283084]:     ],
Nov 25 20:57:15 compute-0 great_keldysh[283084]:     "1": [
Nov 25 20:57:15 compute-0 great_keldysh[283084]:         {
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "devices": [
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "/dev/loop4"
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             ],
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_name": "ceph_lv1",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_size": "21470642176",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e844079-8f15-40a1-8d48-4a531b96b291,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "name": "ceph_lv1",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "tags": {
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.block_uuid": "ZVNQ7g-e6Lx-dy64-97Ju-ySzY-3mui-mlWdf2",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.cluster_name": "ceph",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.crush_device_class": "",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.encrypted": "0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.osd_fsid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.osd_id": "1",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.type": "block",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.vdo": "0"
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             },
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "type": "block",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "vg_name": "ceph_vg1"
Nov 25 20:57:15 compute-0 great_keldysh[283084]:         }
Nov 25 20:57:15 compute-0 great_keldysh[283084]:     ],
Nov 25 20:57:15 compute-0 great_keldysh[283084]:     "2": [
Nov 25 20:57:15 compute-0 great_keldysh[283084]:         {
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "devices": [
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "/dev/loop5"
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             ],
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_name": "ceph_lv2",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_size": "21470642176",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=712dd110-763a-5547-8ef7-acda1414fdce,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21cf5470-2713-4831-8402-4fccd506c64e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "lv_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "name": "ceph_lv2",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "tags": {
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.block_uuid": "fqtbQK-41Om-E3gR-umMe-Rv8z-rXp8-3QHRdp",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.cluster_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.cluster_name": "ceph",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.crush_device_class": "",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.encrypted": "0",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.osd_fsid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.osd_id": "2",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.type": "block",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:                 "ceph.vdo": "0"
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             },
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "type": "block",
Nov 25 20:57:15 compute-0 great_keldysh[283084]:             "vg_name": "ceph_vg2"
Nov 25 20:57:15 compute-0 great_keldysh[283084]:         }
Nov 25 20:57:15 compute-0 great_keldysh[283084]:     ]
Nov 25 20:57:15 compute-0 great_keldysh[283084]: }
Nov 25 20:57:15 compute-0 systemd[1]: libpod-f2db1db6d1761de597424745ebcf728c0a5b06c2ebf424fbd7ab845ea6ac41d7.scope: Deactivated successfully.
Nov 25 20:57:15 compute-0 podman[283020]: 2025-11-25 20:57:15.144619906 +0000 UTC m=+0.996447909 container died f2db1db6d1761de597424745ebcf728c0a5b06c2ebf424fbd7ab845ea6ac41d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_keldysh, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 20:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-46dd4a1893e78cb14255c77e0bfd47cfff687f472c5d14140e164b79d79a95ca-merged.mount: Deactivated successfully.
Nov 25 20:57:15 compute-0 podman[283020]: 2025-11-25 20:57:15.207334517 +0000 UTC m=+1.059162450 container remove f2db1db6d1761de597424745ebcf728c0a5b06c2ebf424fbd7ab845ea6ac41d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_keldysh, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:57:15 compute-0 systemd[1]: libpod-conmon-f2db1db6d1761de597424745ebcf728c0a5b06c2ebf424fbd7ab845ea6ac41d7.scope: Deactivated successfully.
Nov 25 20:57:15 compute-0 sudo[282739]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 20:57:15 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604054962' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:57:15 compute-0 sudo[283306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:57:15 compute-0 sudo[283306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:15 compute-0 sudo[283306]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:15 compute-0 sudo[283361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 20:57:15 compute-0 sudo[283361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:15 compute-0 sudo[283361]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:15 compute-0 sudo[283402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:57:15 compute-0 sudo[283402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:15 compute-0 sudo[283402]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:15 compute-0 sudo[283436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/712dd110-763a-5547-8ef7-acda1414fdce/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 712dd110-763a-5547-8ef7-acda1414fdce -- raw list --format json
Nov 25 20:57:15 compute-0 sudo[283436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:15 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14572 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:15 compute-0 ceph-mgr[75443]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 20:57:15 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:57:15.583+0000 7f92c2df5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 20:57:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 20:57:15 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942763549' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 20:57:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:57:15 compute-0 podman[283550]: 2025-11-25 20:57:15.958204464 +0000 UTC m=+0.053325108 container create 3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:57:15 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 20:57:15 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793060208' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 20:57:15 compute-0 ceph-mon[75144]: pgmap v1568: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:15 compute-0 ceph-mon[75144]: from='client.14562 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:15 compute-0 ceph-mon[75144]: from='client.14566 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:15 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3604054962' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 20:57:15 compute-0 ceph-mon[75144]: from='client.14572 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:15 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2942763549' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 20:57:15 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2793060208' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 20:57:16 compute-0 systemd[1]: Started libpod-conmon-3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2.scope.
Nov 25 20:57:16 compute-0 podman[283550]: 2025-11-25 20:57:15.938013846 +0000 UTC m=+0.033134500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:57:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:57:16 compute-0 podman[283550]: 2025-11-25 20:57:16.053228871 +0000 UTC m=+0.148349545 container init 3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:57:16 compute-0 podman[283550]: 2025-11-25 20:57:16.059874102 +0000 UTC m=+0.154994766 container start 3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 20:57:16 compute-0 podman[283550]: 2025-11-25 20:57:16.063609283 +0000 UTC m=+0.158729937 container attach 3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:57:16 compute-0 compassionate_meitner[283570]: 167 167
Nov 25 20:57:16 compute-0 systemd[1]: libpod-3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2.scope: Deactivated successfully.
Nov 25 20:57:16 compute-0 conmon[283570]: conmon 3428f011725a141dd189 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2.scope/container/memory.events
Nov 25 20:57:16 compute-0 podman[283550]: 2025-11-25 20:57:16.068211418 +0000 UTC m=+0.163332082 container died 3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac9c21bb179ca34b61e56a6af0ab4f51a1569197504afa2bc083397b5551b16c-merged.mount: Deactivated successfully.
Nov 25 20:57:16 compute-0 podman[283550]: 2025-11-25 20:57:16.11551884 +0000 UTC m=+0.210639494 container remove 3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 20:57:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 20:57:16 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950289624' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 20:57:16 compute-0 systemd[1]: libpod-conmon-3428f011725a141dd18917a3a2c27945d5afc889bde7101378aa6db3584760d2.scope: Deactivated successfully.
Nov 25 20:57:16 compute-0 podman[283626]: 2025-11-25 20:57:16.293973941 +0000 UTC m=+0.039654346 container create 2c094ce308aa0cf9fb4a3b3e72674bdf7b08cf84825096f2e8bd3bc43d7f5762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 20:57:16 compute-0 systemd[1]: Started libpod-conmon-2c094ce308aa0cf9fb4a3b3e72674bdf7b08cf84825096f2e8bd3bc43d7f5762.scope.
Nov 25 20:57:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 20:57:16 compute-0 podman[283626]: 2025-11-25 20:57:16.275052158 +0000 UTC m=+0.020732563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 20:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9cfae1de6a30b1f06d2cc234a307a73ca7a60fb69bc525889b5dace474b764/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9cfae1de6a30b1f06d2cc234a307a73ca7a60fb69bc525889b5dace474b764/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9cfae1de6a30b1f06d2cc234a307a73ca7a60fb69bc525889b5dace474b764/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9cfae1de6a30b1f06d2cc234a307a73ca7a60fb69bc525889b5dace474b764/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 20:57:16 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1569: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:16 compute-0 podman[283626]: 2025-11-25 20:57:16.391655911 +0000 UTC m=+0.137336296 container init 2c094ce308aa0cf9fb4a3b3e72674bdf7b08cf84825096f2e8bd3bc43d7f5762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 20:57:16 compute-0 podman[283626]: 2025-11-25 20:57:16.400066819 +0000 UTC m=+0.145747204 container start 2c094ce308aa0cf9fb4a3b3e72674bdf7b08cf84825096f2e8bd3bc43d7f5762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 20:57:16 compute-0 podman[283626]: 2025-11-25 20:57:16.403317567 +0000 UTC m=+0.148997952 container attach 2c094ce308aa0cf9fb4a3b3e72674bdf7b08cf84825096f2e8bd3bc43d7f5762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 20:57:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 20:57:16 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471605150' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 20:57:16 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14582 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:16 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 20:57:16 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713536320' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 20:57:16 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14586 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2950289624' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 20:57:17 compute-0 ceph-mon[75144]: pgmap v1569: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3471605150' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 20:57:17 compute-0 ceph-mon[75144]: from='client.14582 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:17 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1713536320' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 20:57:17 compute-0 ceph-mon[75144]: from='client.14586 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 20:57:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/896592147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:57:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 20:57:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/896592147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:57:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 20:57:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513611996' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 20:57:17 compute-0 nifty_germain[283661]: {
Nov 25 20:57:17 compute-0 nifty_germain[283661]:     "21cf5470-2713-4831-8402-4fccd506c64e": {
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "osd_id": 2,
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "osd_uuid": "21cf5470-2713-4831-8402-4fccd506c64e",
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "type": "bluestore"
Nov 25 20:57:17 compute-0 nifty_germain[283661]:     },
Nov 25 20:57:17 compute-0 nifty_germain[283661]:     "7e844079-8f15-40a1-8d48-4a531b96b291": {
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "osd_id": 1,
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "osd_uuid": "7e844079-8f15-40a1-8d48-4a531b96b291",
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "type": "bluestore"
Nov 25 20:57:17 compute-0 nifty_germain[283661]:     },
Nov 25 20:57:17 compute-0 nifty_germain[283661]:     "f0a2211a-2b5d-4914-9a66-9743102e8fa4": {
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "ceph_fsid": "712dd110-763a-5547-8ef7-acda1414fdce",
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "osd_id": 0,
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "osd_uuid": "f0a2211a-2b5d-4914-9a66-9743102e8fa4",
Nov 25 20:57:17 compute-0 nifty_germain[283661]:         "type": "bluestore"
Nov 25 20:57:17 compute-0 nifty_germain[283661]:     }
Nov 25 20:57:17 compute-0 nifty_germain[283661]: }
Nov 25 20:57:17 compute-0 systemd[1]: libpod-2c094ce308aa0cf9fb4a3b3e72674bdf7b08cf84825096f2e8bd3bc43d7f5762.scope: Deactivated successfully.
Nov 25 20:57:17 compute-0 podman[283626]: 2025-11-25 20:57:17.328543583 +0000 UTC m=+1.074223988 container died 2c094ce308aa0cf9fb4a3b3e72674bdf7b08cf84825096f2e8bd3bc43d7f5762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 20:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab9cfae1de6a30b1f06d2cc234a307a73ca7a60fb69bc525889b5dace474b764-merged.mount: Deactivated successfully.
Nov 25 20:57:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 20:57:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2738440286' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 20:57:17 compute-0 podman[283626]: 2025-11-25 20:57:17.390241816 +0000 UTC m=+1.135922211 container remove 2c094ce308aa0cf9fb4a3b3e72674bdf7b08cf84825096f2e8bd3bc43d7f5762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_germain, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 20:57:17 compute-0 systemd[1]: libpod-conmon-2c094ce308aa0cf9fb4a3b3e72674bdf7b08cf84825096f2e8bd3bc43d7f5762.scope: Deactivated successfully.
Nov 25 20:57:17 compute-0 sudo[283436]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 20:57:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:57:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 20:57:17 compute-0 ceph-mon[75144]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:57:17 compute-0 sudo[283845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 20:57:17 compute-0 sudo[283845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:17 compute-0 sudo[283845]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:17 compute-0 sudo[283882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 20:57:17 compute-0 sudo[283882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 20:57:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 20:57:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107327389' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 20:57:17 compute-0 sudo[283882]: pam_unix(sudo:session): session closed for user root
Nov 25 20:57:17 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 20:57:17 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3251400432' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 20:57:18 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794332622' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/896592147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mon[75144]: from='client.? 192.168.122.10:0/896592147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3513611996' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2738440286' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:57:18 compute-0 ceph-mon[75144]: from='mgr.14132 192.168.122.100:0/2650958737' entity='mgr.compute-0.hdjasd' 
Nov 25 20:57:18 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3107327389' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3251400432' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2794332622' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14602 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mgr[75443]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 20:57:18 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:57:18.191+0000 7f92c2df5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 20:57:18 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1570: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 20:57:18 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2495314014' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 20:57:18 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306931745' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:22:57.276155+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:22:58.276369+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:22:59.276512+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:00.276640+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:01.276868+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:02.277042+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:03.277232+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:04.277404+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:05.277541+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:06.277700+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:07.277877+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:08.277998+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:09.278209+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:10.278328+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:11.278471+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:12.279146+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:13.279287+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:14.279428+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:15.279587+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:16.279750+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:17.279926+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:18.280124+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:19.280295+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:20.280474+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:21.280684+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:22.280852+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:23.281008+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:24.281154+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:25.281288+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:26.281418+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:27.281566+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:28.281728+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:29.281932+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:30.282108+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:31.282342+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:32.282495+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:33.282768+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:34.282956+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:35.283841+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:36.284105+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:37.284341+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:38.284617+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:39.284928+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:40.285170+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:41.285435+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:42.285627+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:43.285785+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:44.285998+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:45.286180+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:46.286395+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:47.286636+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:48.286832+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:49.287064+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:50.287361+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:51.287630+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:52.287873+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:53.288061+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:54.288181+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:55.288286+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:56.288441+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:57.288581+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:58.288748+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:59.288889+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:00.289004+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:01.289145+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:02.289321+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
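
Here the stream finally varies: mapped drops from 59858944 to 59850752 and unmapped rises by the same 8192 bytes, i.e. two 4 KiB pages were released back to the allocator, while heap and the cache budget stay unchanged. With hundreds of near-identical lines, a deduplicating scan is the practical way to find such transitions. A sketch (how the log is fed in is up to the caller; nothing Ceph-specific is assumed beyond the line format):

    import re
    import sys

    TUNE_RE = re.compile(r"tune_memory target: \d+ mapped: (\d+) unmapped: (\d+)")

    def transitions(lines):
        """Yield (lineno, mapped, unmapped) each time the pair changes."""
        last = None
        for lineno, line in enumerate(lines, 1):
            m = TUNE_RE.search(line)
            if m and m.groups() != last:
                last = m.groups()
                yield lineno, int(last[0]), int(last[1])

    # Usage: python spot_transitions.py < /var/log/messages  (path illustrative)
    if __name__ == "__main__":
        for lineno, mapped, unmapped in transitions(sys.stdin):
            print(f"line {lineno}: mapped={mapped} unmapped={unmapped}")
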
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:03.289496+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:04.289661+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:05.289878+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:06.290116+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:07.290372+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:08.290617+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:09.290910+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:10.291182+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:11.291468+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:12.291623+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:13.291779+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:14.291967+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:15.292164+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:16.292310+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:17.292450+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:18.395096+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:19.395385+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:20.395631+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:21.396435+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:22.396641+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:23.396844+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:24.397022+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:25.397257+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:26.397490+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:27.397693+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:28.397887+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:29.398068+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:30.398282+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:31.398505+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:32.398596+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:33.398731+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:34.398870+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:35.399039+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:36.399815+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:37.399950+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:38.400061+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:39.400252+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:40.400411+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:41.400595+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:42.400763+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:43.400964+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:44.401187+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:45.401459+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:46.401707+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:47.402115+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:48.402332+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:49.402475+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:50.402664+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:51.403000+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:52.403148+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:53.403279+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:54.403477+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:55.403761+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:56.404032+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:57.404292+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:58.404669+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:59.404943+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:00.405156+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:01.405711+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:02.406028+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:03.406266+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:04.406649+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:05.407059+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:06.407319+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:07.407693+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:08.408068+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:09.408430+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:10.408772+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:11.409142+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:12.409358+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:13.409593+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:14.409864+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:15.410286+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:16.410669+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:17.411141+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:18.411543+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:19.411699+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:20.411953+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:21.412167+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:22.412322+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:23.418699+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:24.418882+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:25.419075+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:26.419227+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:27.419375+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:28.419548+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:29.419724+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:30.419868+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:31.420072+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:32.420226+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:33.420450+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:34.420629+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:35.420839+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:36.421004+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:37.421186+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:38.421391+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:39.421619+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:40.421782+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:41.421955+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:42.422117+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:43.422320+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:44.422492+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:45.422668+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:46.422899+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:47.423157+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:48.423379+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:49.423582+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:50.423745+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:51.423947+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:52.424103+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:53.424294+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:54.424459+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:55.424621+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:56.424856+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:57.425043+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:58.425209+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:59.425365+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:00.593170+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:01.593367+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:02.593559+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:03.593736+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:04.593971+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:05.594190+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:06.594355+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:07.594517+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:08.594735+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:09.594935+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:10.595136+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:11.595409+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:12.595600+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:13.595764+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:14.596077+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:15.596309+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:16.596565+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:17.596752+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:18.600468+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:19.600687+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:20.601050+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:21.601412+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:22.601574+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:23.601721+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:24.601876+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:25.602017+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:26.602245+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:27.602443+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:28.602605+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:29.602744+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:30.602903+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:31.603121+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:32.603325+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:33.603528+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:34.603682+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:35.603857+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d979090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b17d9791f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:36.604213+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:37.604963+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:38.605119+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:39.605277+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:40.605512+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:41.605690+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:42.607077+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:43.607357+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:44.607666+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:45.608307+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:46.608692+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:47.608860+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:48.609181+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:49.609506+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:50.609781+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:51.610177+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:52.610386+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:53.610608+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:54.611051+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:55.611485+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:56.611624+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:57.611769+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:58.612064+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:59.612310+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:00.967069+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:01.967319+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:02.967531+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:03.967690+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:04.967872+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:05.968036+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:06.968204+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:07.968432+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:08.968654+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:09.968890+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:10.969068+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:11.969278+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:12.969449+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:13.969642+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:14.969852+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:15.970036+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:16.970259+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:17.970453+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:18.970630+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:19.970836+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:20.971067+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:21.971284+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:22.971444+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:23.971564+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:24.971740+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:25.971934+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:26.972118+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:27.972312+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:28.972560+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:29.972741+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:30.973003+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:31.973245+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:32.973445+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:33.973621+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:34.973831+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:35.974149+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:36.974342+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:37.974543+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:38.974722+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:39.974929+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:40.975136+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:41.975473+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:42.975701+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:43.975914+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:44.976089+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:45.976297+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:46.976453+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:47.976636+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:48.976835+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:49.976986+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:50.977183+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:51.977336+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:52.977481+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:53.977677+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:54.977885+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:55.978057+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:56.978279+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:57.978512+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:58.978713+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:59.978929+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:00.979129+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:01.979362+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:02.979581+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:03.980464+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:04.980648+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:05.980862+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:06.981024+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:07.981184+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:08.981358+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:09.981508+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:10.981702+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:11.981901+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:12.982024+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:13.982215+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:14.982409+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:15.982630+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:16.982897+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:17.983021+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:18.983201+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:19.983361+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:20.983502+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:21.983694+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:22.983929+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:23.984115+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:24.984294+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:25.984508+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:26.984704+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:27.984942+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:28.985200+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:29.985395+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:30.985612+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:31.985862+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:32.986066+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:33.986328+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:34.986573+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:35.986748+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:36.986950+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:37.987183+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:38.987451+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14608 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:39.987743+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:40.988025+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:41.988264+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:42.988475+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:43.988718+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:44.988954+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:45.989209+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:46.989469+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:47.989715+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:48.989947+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:49.990161+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:50.990424+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:51.990684+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:52.990911+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:53.991182+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:54.991365+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:55.991549+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:56.991745+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:57.991923+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:58.992132+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:59.992390+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:00.992620+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:01.992902+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:02.993159+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:03.993326+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:04.993588+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:05.993893+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:06.994155+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:07.994400+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:08.994596+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:09.994952+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:10.995198+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:11.995416+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:12.995632+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:13.995852+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:14.996081+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:15.996308+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:16.996535+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:17.996879+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:18.997118+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:19.997371+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:20.997616+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:21.997905+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:22.998156+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:23.998374+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:24.998617+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:25.998860+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:26.999106+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:27.999357+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:28.999598+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:29.999891+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:31.000092+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:32.000333+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:33.000560+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:34.000769+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:35.001024+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:36.001283+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:37.001574+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:38.001871+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:39.002128+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:40.002321+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:41.002553+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:42.002907+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:43.003150+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:44.003337+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:45.003519+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:46.003731+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:47.003926+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:48.004104+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:49.004245+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:50.004422+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:51.004600+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:52.004981+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:53.005122+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:54.005267+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:55.005401+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:56.005565+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:57.005696+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:58.005834+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:59.005987+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:00.006139+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:01.171279+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:02.171496+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
[The same cycle continues unchanged for the remainder of the capture, every line still journald-stamped Nov 25 20:57:18: the monclient triad fires once per second, its rotating-secret expiry advancing from 2025-11-25T20:30:03.171643 to 2025-11-25T20:31:42.196648; the prioritycache tune_memory line repeats with identical values until the 20:31:32 tick, after which mapped drops from 59867136 to 59858944 and unmapped rises from 892928 to 901120; the osd.2 heartbeat osd_stat line recurs every few seconds with identical statistics (once back-to-back, just before the 20:30:12 tick); and the rocksdb commit_cache_size pair (0.285714 / 0.0555556) plus the bluestore _resize_shards line recur every ~5 seconds, also unchanged.]
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:43.196775+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:44.196969+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:45.197117+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:46.197248+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:47.197426+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:48.197579+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:49.197788+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:50.197958+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:51.198115+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:52.198372+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:53.198532+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:54.198681+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:55.198896+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:56.199028+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:57.199377+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:58.199584+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:59.200730+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:00.200930+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:01.201079+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:02.202843+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:03.205986+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:04.206213+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:05.209254+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:06.209652+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:07.212046+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:08.212418+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:09.213447+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:10.214466+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:11.214956+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:12.215246+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:13.215652+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:14.215852+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:15.216470+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:16.216778+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:17.217515+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:18.218219+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:19.218559+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:20.218768+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:21.218873+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:22.219084+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:23.219245+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:24.219400+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:25.219546+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:26.219736+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:27.219931+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:28.220062+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:29.220319+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:30.220585+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:31.220884+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:32.221115+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:33.221312+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:34.221497+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:35.221640+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:36.221848+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:37.222054+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:38.222301+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:39.222490+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:40.224855+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:41.225140+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:42.225469+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:43.225743+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:44.225940+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:45.226197+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:46.226468+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:47.226732+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:48.227019+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:49.227273+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:50.227520+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:51.227791+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:52.228123+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:53.228372+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:54.228591+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:55.228860+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:56.229125+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:57.229352+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:58.229643+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:59.229949+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:00.230132+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:01.230411+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:02.230687+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:03.230934+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:04.231384+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:05.231749+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:06.232156+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:07.232535+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:08.232859+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:09.233278+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:10.233439+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:11.233550+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:12.234366+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:13.234621+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:14.234848+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:15.235063+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:16.235245+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:17.235462+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:18.235642+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:19.235895+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:20.236071+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:21.236234+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:22.236516+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:23.236765+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:24.236961+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:25.237163+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:26.237434+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:27.237647+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:28.237893+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:29.238087+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:30.238235+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:31.238387+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:32.238561+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:33.238707+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:34.238877+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:35.239111+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:36.239285+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:37.239476+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:38.239695+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:39.239933+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:40.240106+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:41.240351+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:42.240548+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:43.240735+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:44.240936+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:45.241121+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:46.241324+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:47.241500+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:48.241699+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:49.241907+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:50.242072+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:51.242230+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:52.242458+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:53.242651+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:54.242832+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:55.242967+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:56.243123+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:57.243284+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:58.243451+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:59.243675+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:00.243900+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:01.244069+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:02.244220+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:03.244374+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:04.244565+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:05.244789+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:06.245993+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:07.247126+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:08.248756+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:09.249680+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:10.250561+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:11.250789+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:12.250911+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:13.251462+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:14.251781+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:15.252895+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:16.253543+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:17.255900+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:18.256656+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:19.257513+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:20.257722+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:21.258427+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:22.258963+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:23.259264+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:24.259596+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:25.259790+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:26.260353+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:27.260548+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:28.260902+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:29.261195+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:30.261406+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:31.261715+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:32.262071+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:33.262370+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:34.262882+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:35.263190+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:36.263512+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:37.263849+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:38.264175+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:39.264307+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:40.264493+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:41.264683+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:42.264924+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:43.265136+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:44.265321+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:45.265466+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:46.265707+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:47.265910+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:48.266122+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:49.266299+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:50.266492+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:51.266704+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:52.267012+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:53.267303+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:54.267539+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:55.267728+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:56.267942+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:57.268126+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:58.268274+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:59.268419+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:00.268595+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:01.268774+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:02.269180+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:03.269552+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:04.269841+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:05.270124+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:06.270360+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:07.270587+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:08.270789+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:09.270986+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:10.271145+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:11.271301+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:12.271579+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:13.271879+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:14.272021+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:15.272224+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:16.272565+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:17.272910+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:18.273090+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:19.273380+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:20.273781+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:21.279249+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:22.279633+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:23.279956+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:24.280244+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:25.280487+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:26.280714+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:27.280923+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:28.281166+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:29.281451+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:30.281681+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:31.281911+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:32.282154+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:33.282366+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:34.282560+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:35.282783+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:36.283086+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:37.283333+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:38.283530+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:39.283734+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:40.283981+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:41.284258+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:42.284540+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:43.284727+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:44.284885+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:45.285042+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:46.285180+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:47.285367+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:48.285531+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:49.285748+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:50.285948+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:51.286088+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:52.286289+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:53.286464+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:54.286620+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:55.286769+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:56.286925+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:57.287051+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:58.287201+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:59.287360+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:00.287454+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:01.287647+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:02.287857+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:03.288020+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:04.288217+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:05.288421+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:06.288657+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:07.288868+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:08.289064+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:09.289231+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:10.289463+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:11.289614+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 917504 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:12.289841+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:13.290007+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:14.290180+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:15.290679+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:16.290980+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:17.291569+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:18.294519+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:19.294768+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:20.295130+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:21.295403+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:22.295905+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:23.296093+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:24.296318+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:25.296439+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:26.296626+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:27.296903+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:28.297144+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:29.297370+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:30.297568+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:31.297784+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:32.298178+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:33.298333+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:34.298494+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:35.298737+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:36.298937+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:37.299361+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:38.299633+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:39.299864+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:40.300305+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:41.300598+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: mgrc ms_handle_reset ms_handle_reset con 0x55b17e7c9c00
Nov 25 20:57:18 compute-0 ceph-osd[91367]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/446496168
Nov 25 20:57:18 compute-0 ceph-osd[91367]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/446496168,v1:192.168.122.100:6801/446496168]
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: get_auth_request con 0x55b17fb9a800 auth_method 0
Nov 25 20:57:18 compute-0 ceph-osd[91367]: mgrc handle_mgr_configure stats_period=5
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:42.300859+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:43.301051+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:44.301232+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:45.301426+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:46.301714+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:47.301889+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:48.302134+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:49.302322+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:50.302497+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:51.302677+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:52.302889+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:53.303054+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:54.303251+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:55.303440+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:56.303670+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:57.303851+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:58.304000+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:59.304162+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:00.304314+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:01.304497+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:02.304637+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:03.304845+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:04.304982+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:05.305133+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:06.305298+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:07.305521+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:08.305689+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:09.305871+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:10.306027+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:11.306260+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:12.306478+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:13.306677+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:14.306879+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:15.307059+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:16.307248+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:17.307459+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:18.307614+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:19.307861+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:20.308112+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:21.308302+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:22.308510+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:23.308740+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:24.308996+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:25.309211+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:26.309470+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:27.309684+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:28.309837+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:29.310134+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:30.310286+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:31.310491+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:32.310641+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:33.310881+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:34.311063+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:35.311329+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:36.311601+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:37.311835+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:38.312046+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:39.312218+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:40.312446+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:41.312668+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:42.312907+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:43.313131+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:44.313388+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:45.313699+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:46.313904+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:47.314152+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:48.314394+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:49.314626+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:50.314842+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:51.315086+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:52.315376+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:53.315563+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:54.315846+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:55.316072+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:56.316337+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:57.316549+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:58.316891+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:59.317143+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:00.317346+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:01.317533+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:02.317840+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:03.318067+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:04.318307+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:05.318504+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:06.318833+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:07.319090+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:08.319309+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:09.319550+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:10.319789+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:11.319990+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:12.320983+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:13.321231+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:14.321450+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:15.321616+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:16.321850+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:17.321988+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:18.322143+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:19.322329+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:20.322532+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:21.322725+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:22.322940+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:23.323181+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:24.323323+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:25.323468+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:26.323638+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:27.323779+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:28.323955+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:29.324126+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:30.324261+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:31.324411+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:32.324621+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:33.324742+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:34.324889+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:35.325058+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:36.325191+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:37.325397+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:38.325519+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:39.325726+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:40.325919+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:41.326138+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:42.326335+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:43.326483+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:44.326682+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:45.326833+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:46.327156+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:47.327306+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:48.327499+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:49.327661+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:50.327855+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:51.327998+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:52.328230+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:53.328461+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:54.328652+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:55.328860+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:56.329028+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:57.329197+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:58.329321+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:59.329451+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:00.329609+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:01.329723+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:02.329879+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:03.329968+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:04.330086+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:05.330252+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:06.330448+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:07.330588+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:08.330865+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:09.331016+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:10.331175+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:11.331319+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:12.331497+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:13.331644+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:14.332021+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:15.332178+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:16.332314+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:17.332553+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:18.332721+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:19.332887+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:20.333089+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:21.333220+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:22.333381+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:23.333616+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:24.333772+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:25.333930+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:26.334106+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:27.334290+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:28.334501+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:29.334930+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:30.335508+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:31.335951+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:32.336288+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:33.336503+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:34.336706+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:35.337096+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:36.337325+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:37.337516+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:38.337733+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:39.337975+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:40.338157+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:41.338320+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:42.338559+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:43.338758+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:44.338942+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:45.339325+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:46.339520+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:47.339710+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:48.339876+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:49.340108+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:50.340285+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:51.340453+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:52.340719+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:53.340907+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:54.341090+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:55.341320+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:56.341509+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:57.341680+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:58.341871+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:59.342082+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:00.342235+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:01.342408+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:02.342589+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:03.342760+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:04.342922+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:05.343105+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:06.343300+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:07.343495+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:08.343683+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:09.343872+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:10.344051+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:11.344265+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:12.344465+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:13.344638+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:14.344940+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:15.345218+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:16.345453+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:17.345657+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:18.345864+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:19.346042+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:20.346299+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:21.346484+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:22.346781+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:23.347055+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:24.347315+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:25.347504+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:26.347715+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:27.347973+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:28.348137+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:29.348285+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:30.348434+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:31.348634+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:32.348917+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:33.349125+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:34.349471+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:35.349685+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:36.349874+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:37.350170+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:38.350436+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:39.350606+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:40.350871+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:41.351051+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:42.351260+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:43.351439+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:44.351597+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:45.351829+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:46.351998+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:47.352208+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:48.352407+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:49.352618+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:50.352872+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:51.352986+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:52.353267+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:53.353451+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:54.353582+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:55.353763+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:56.353968+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:57.354151+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:58.354349+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:59.354514+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:00.354684+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:01.354873+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:02.355083+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:03.355254+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:04.355475+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:05.355674+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:06.355911+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:07.356133+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:08.356344+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:09.356562+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:10.356731+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:11.356930+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:12.357166+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:13.357348+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:14.357559+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:15.357839+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:16.358086+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:17.358250+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:18.358392+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:19.358546+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:20.358879+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:21.359048+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:22.359251+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:23.359440+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:24.359612+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:25.359832+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:26.359995+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:27.360211+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:28.360404+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:29.360588+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:30.360760+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:31.360956+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:32.361179+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:33.361347+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:34.361535+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:35.361730+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:36.361877+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:37.362025+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:38.362161+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:39.362343+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:40.362494+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:41.362669+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:42.362869+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:43.363048+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:44.363187+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:45.363315+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:46.387645+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:47.387850+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:48.388052+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:49.388285+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:50.388518+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:51.388708+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:52.388939+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:53.389168+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:54.389440+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:55.389679+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:56.389915+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:57.390102+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:58.390328+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:59.390580+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:00.390734+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:01.390941+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:02.391165+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:03.391359+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:04.391596+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:05.391746+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:06.391927+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:07.392100+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:08.392311+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:09.392531+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:10.392712+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:11.392845+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:12.393030+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:13.393187+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:14.393393+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:15.393588+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:16.393769+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:17.393958+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:18.394134+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:19.394312+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:20.394501+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:21.394730+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:22.394966+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:23.395162+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:24.395393+0000)
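
Every line in this stretch carries the same syslog stamp (20:57:18), which looks like a buffered flush of earlier debug output rather than hundreds of events in one second; the advancing "expire after" stamps are the more useful clock. On the assumption that the expiry horizon is a fixed offset from the moment each _check_auth_rotating ran, the spacing of successive stamps approximates the monclient tick interval. A quick check using three stamps from the triplets above (hypothetical reader-side code, not Ceph's):

    # Estimate the monclient tick interval from successive "expire after"
    # stamps; assumes the expiry horizon is a constant offset per check.
    from datetime import datetime

    stamps = [
        "2025-11-25T20:42:22.394966+0000",
        "2025-11-25T20:42:23.395162+0000",
        "2025-11-25T20:42:24.395393+0000",
    ]
    t = [datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z") for s in stamps]
    print([(b - a).total_seconds() for a, b in zip(t, t[1:])])
    # -> [1.000196, 1.000231]: one tick per second
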
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:25.395675+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:26.395932+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:27.396173+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:28.396452+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:29.396678+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:30.396867+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:31.397062+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:32.397469+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:33.397724+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:34.397972+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:35.398198+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:36.398444+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:37.398727+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:38.398941+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:39.399157+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:40.399411+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
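
The heartbeat line repeats byte-for-byte through this window because the store contents are not changing. Reading the store_statfs() triple as available / internally reserved / total and the data pair as stored / allocated — an assumed field order that this log does not itself confirm, though available coming out smaller than total supports it — the OSD sits on a ~20 GiB device holding only a few hundred KiB of object data:

    # Hypothetical decoder for the store_statfs() fields in the heartbeat
    # line; the field naming (available/reserved/total, stored/allocated)
    # is an assumption about the printer's order, not stated in this log.
    GiB = 1024 ** 3

    available, reserved, total = 0x4FE0F7000, 0x0, 0x4FFC00000
    stored, allocated = 0x998B2, 0xD7000

    print(f"total      {total / GiB:6.2f} GiB")
    print(f"available  {available / GiB:6.2f} GiB")
    print(f"used       {(total - available) / 2**20:6.1f} MiB")
    print(f"object data: {stored} B stored / {allocated} B allocated")
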
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:41.399614+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:42.399914+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:43.400154+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:44.400436+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:45.400722+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:46.400912+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:47.401116+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:48.401275+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:49.401462+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:50.401653+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:51.401834+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:52.402079+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:53.402227+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:54.402414+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:55.402663+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:56.402903+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:57.403105+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:58.403370+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:59.404017+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:00.404196+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:01.404399+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:02.404670+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:03.404888+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:04.405079+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:05.405254+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:06.405459+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:07.405659+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:08.405883+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:09.406117+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:10.406276+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:11.406475+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:12.406701+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:13.406976+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:14.407181+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:15.407373+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:16.407625+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:17.407843+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:18.408057+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:19.408282+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:20.408514+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:21.408782+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:22.409175+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:23.409435+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:24.409583+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:25.409958+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:26.410210+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:27.410471+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:28.410767+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:29.411067+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:30.411345+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:31.411687+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:32.411996+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:33.412179+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:34.412455+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:35.413317+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:36.413623+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:37.413945+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:38.414231+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:39.414553+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:40.414848+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:41.415105+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:42.415359+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:43.415680+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:44.416059+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:45.416255+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:46.416599+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:47.416909+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:48.417246+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:49.417514+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:50.417754+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:51.418078+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:52.418394+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:53.418654+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:54.419443+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:55.419889+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:56.420163+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:57.420387+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:58.420623+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:59.420937+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:00.421215+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:01.421414+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:02.421763+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:03.422153+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:04.422463+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:05.422700+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:06.422919+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:07.423132+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:08.423420+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:09.423766+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:10.424071+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:11.424366+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:12.424650+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:13.424957+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:14.425197+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:15.425455+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:16.425630+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:17.425874+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:18.426551+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:19.427167+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:20.427478+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:21.428084+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:22.428651+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:23.429180+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:24.429642+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:25.430063+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:26.430520+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:27.430848+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:28.431191+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:29.431557+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:30.431846+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:31.432103+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:32.432446+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:33.433118+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:34.433441+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:35.433775+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:36.434285+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:37.434676+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:38.435070+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:39.435508+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:40.435925+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:41.436183+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:42.436546+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:43.436889+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:44.437143+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:45.437476+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:46.437760+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:47.438123+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:48.438444+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:49.438692+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:50.440194+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:51.441012+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:52.442593+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:53.443547+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:54.444061+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:55.445135+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:56.445412+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:57.445744+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:58.446041+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:59.446555+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:00.447370+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:01.447650+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:02.448320+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:03.448940+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:04.449417+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:05.449651+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:06.449866+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:07.450082+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:08.450540+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:09.450742+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:10.451131+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:11.451277+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:12.451531+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:13.451666+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:14.451893+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:15.452237+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:16.452448+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:17.452724+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:18.453078+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:19.453442+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:20.453714+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:21.453921+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:22.454197+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:23.454423+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:24.454695+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:25.454878+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:26.455119+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:27.455349+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:28.455540+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:29.455779+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:30.456009+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:31.456160+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:32.456368+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:33.456575+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:34.456784+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:35.457060+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:36.457277+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:37.457550+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:38.457739+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:39.457996+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:40.469193+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:41.469419+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:42.469650+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:43.469880+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:44.470161+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:45.470487+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:46.470701+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:47.470959+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:48.471115+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:49.471361+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:50.471600+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:51.471869+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:52.472117+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:53.472323+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:54.472574+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:55.472755+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:56.473700+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:57.474031+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:58.474246+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:59.475102+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:00.475495+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:01.476145+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:02.476778+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:03.477436+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:04.477951+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:05.478429+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
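
Two heartbeat reports can land back to back, as above: every journal line in this burst carries the identical wall-clock stamp (Nov 25 20:57:18), so the only ordering signal is the embedded timestamps, and the independent timers (heartbeat, monclient tick, cache tuning) interleave however they were buffered. When triaging a flush like this, a frequency summary per message type is often more useful than reading linearly; a throwaway sketch, where the unit name in the usage line is an assumption about this deployment:

    # Usage (hypothetical unit name): journalctl -u ceph-osd@2 | python3 summarize.py
    import sys
    from collections import Counter

    def key(line: str) -> str:
        msg = line.split("]: ", 1)[-1]        # strip "Nov 25 ... ceph-osd[pid]: "
        for prefix in ("monclient:", "rocksdb:", "prioritycache", "bluestore", "osd."):
            if msg.startswith(prefix):
                return prefix
        return "other"

    counts = Counter(key(l) for l in sys.stdin)
    for k, n in counts.most_common():
        print(f"{n:6d}  {k}")
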
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:06.478924+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:07.479598+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:08.480561+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:09.481075+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:10.481464+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:11.481853+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:12.482176+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:13.482761+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:14.482916+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:15.483054+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:16.483303+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:17.483549+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:18.483788+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:19.484170+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:20.484506+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:21.484762+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:22.485173+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:23.485415+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:24.485571+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:25.485756+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:26.485943+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:27.486106+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:28.486273+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:29.486419+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:30.486588+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:31.486719+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:32.487006+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:33.487186+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:34.487324+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:35.487510+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
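
This is RocksDB's periodic statistics dump; the 600.0 s interval matches its usual stats_dump_period_sec, and at 2400.1 s of uptime this is the fourth dump since the OSD started. Cumulatively the DB has seen 3955 writes carrying ~18K keys, with 313 WAL syncs, i.e. the logged 12.64 writes per sync; every "Interval" row is zero, so nothing was written in the last ten minutes, consistent with the idle store stats above. A toy parser for the headline numbers, written against these exact lines rather than as a general RocksDB stats reader:

    import re

    block = """Uptime(secs): 2400.1 total, 600.0 interval
    Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.01 MB/s"""

    uptime = float(re.search(r"Uptime\(secs\): ([\d.]+) total", block).group(1))
    writes = int(re.search(r"Cumulative writes: (\d+) writes", block).group(1))
    syncs = int(re.search(r"(\d+) syncs", block).group(1))

    print(f"{writes} writes over {uptime:.0f} s -> {writes / uptime:.2f} writes/s")
    print(f"{writes / syncs:.2f} writes per WAL sync")  # 12.64, as logged
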
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:36.487679+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:37.487859+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:38.488061+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:39.488281+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:40.488449+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:41.488596+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:42.488882+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:43.488993+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:44.489195+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:45.489360+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:46.489555+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:47.489737+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:48.489919+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:49.490095+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:50.490285+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:51.490485+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:52.490716+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:53.490846+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:54.491014+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:55.491513+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:56.491911+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:57.492173+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:58.492371+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:59.492644+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:00.492972+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:01.493945+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:02.494757+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:03.495101+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:04.495312+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:05.495457+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:06.495744+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:07.495992+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:08.496213+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:09.496511+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:10.496726+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:11.496977+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:12.497288+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:13.497552+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:14.497858+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:15.498122+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:16.498337+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:17.498609+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:18.498915+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:19.499227+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:20.499489+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:21.499740+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:22.500081+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:23.500363+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:24.500571+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:25.500908+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:26.501094+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:27.501249+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:28.501456+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:29.501646+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:30.501863+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:31.502057+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:32.502268+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:33.503131+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:34.503246+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:35.503385+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:36.503531+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:37.503759+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:38.503915+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:39.504075+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:40.504245+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:41.504421+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:42.504646+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:43.504932+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:44.525058+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:45.525248+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:46.525436+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:47.525645+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:48.525772+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:49.526008+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:50.526224+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:51.526456+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:52.526689+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:53.526880+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:54.527092+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:55.527330+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:56.527588+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:57.527846+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:58.527969+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:59.528154+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:00.528336+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:01.528498+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:02.528756+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:03.528968+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:04.529185+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:05.529415+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:06.529720+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:07.529979+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:08.530201+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:09.530487+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:10.530858+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:11.531184+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:12.531581+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:13.531854+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:14.532084+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:15.532262+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:16.532574+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:17.532900+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:18.533204+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:19.533398+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:20.533679+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:21.533908+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:22.534219+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:23.534565+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:24.535267+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:25.535543+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:26.535865+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:27.536138+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:28.536345+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:29.536604+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:30.536916+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:31.537081+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:32.537269+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:33.537402+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:34.537552+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:35.537777+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:36.537994+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:37.538185+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:38.538332+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:39.538580+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:40.538901+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:41.539257+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:42.539577+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:43.539942+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:44.540231+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:45.540628+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:46.540985+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:47.541247+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:48.541604+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:49.541986+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:50.542243+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:51.542500+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:52.542866+0000)
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:18 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:18 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:18 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:18 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:53.543219+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:54.543554+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:55.543896+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:56.544225+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:57.544549+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:58.544985+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:59.545261+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:00.545781+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:01.546079+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:02.546401+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:03.546751+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:04.547140+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:05.547407+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:06.547887+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:07.548155+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:08.548442+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:09.548723+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:10.548997+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:11.549265+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:12.549543+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:13.549844+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:14.550150+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:15.550461+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:16.550717+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:17.551001+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:18.551298+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:19.551637+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:20.551980+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:21.552257+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:22.552659+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:23.552926+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:24.553168+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:25.553477+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:26.554207+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:27.554350+0000)
Nov 25 20:57:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/204254951' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
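The two ceph-mon lines interleaved here are the monitor receiving and audit-logging an admin query: client.admin at 192.168.122.100 asked for the last 10000 cluster-log entries at debug level. That dispatched JSON is what the standard `ceph log last` subcommand produces, i.e. something along the lines of `ceph log last 10000 debug cluster` on the client side; the exact invocation is inferred from the dispatched form, since only the mon's view is recorded.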
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:28.554530+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:29.554906+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:30.555119+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:31.555290+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:32.555491+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:33.555644+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:34.555869+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:35.556096+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:36.556281+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:37.556405+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:38.556560+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:39.556857+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:40.559617+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:41.560876+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:42.564725+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:43.568200+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:44.568462+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:45.568704+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:46.568871+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:47.569011+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:48.569202+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:49.569410+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:50.569596+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:51.569906+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:52.570345+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:53.570655+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:54.570915+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:55.571165+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:56.571410+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:57.571681+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:58.571955+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:59.572320+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:00.572604+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:01.572954+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: no keepalive since 2025-11-25T20:50:31.573020+0000 (2106-02-07T06:28:15.999915+0000 seconds), reconnecting
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _reopen_session rank -1
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _add_conns ranks=[0]
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): picked mon.compute-0 con 0x55b1801b4800 addr [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): start opening mon connection
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): _renew_subs
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): get_auth_request con 0x55b1801b4800 auth_method 0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): _init_auth method 2
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): _init_auth already have auth, reseting
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): handle_auth_reply_more payload 9
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): handle_auth_reply_more payload_len 9
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient(hunting): handle_auth_done global_id 14220 payload 293
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _finish_hunting 0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: found mon.compute-0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _finish_auth 0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:01.598749+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: handle_monmap mon_map magic: 0 v1
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient:  got monmap 1 from mon.compute-0 (according to old e1)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: dump:
                                           epoch 1
                                           fsid 712dd110-763a-5547-8ef7-acda1414fdce
                                           last_changed 2025-11-25T20:05:05.443078+0000
                                           created 2025-11-25T20:05:05.443078+0000
                                           min_mon_release 18 (reef)
                                           election_strategy: 1
                                           0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: handle_config config(9 keys) v1
Nov 25 20:57:19 compute-0 ceph-osd[91367]: set_mon_vals no callback set
Nov 25 20:57:19 compute-0 ceph-osd[91367]: mgrc handle_mgr_map Got map version 9
Nov 25 20:57:19 compute-0 ceph-osd[91367]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/446496168,v1:192.168.122.100:6801/446496168]
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:06.074337+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:07.074527+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:08.074710+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:09.074930+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:10.075184+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:11.075353+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:12.075508+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:13.075868+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:14.076070+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:15.076306+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:16.076529+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:17.076671+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:18.076870+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:19.077043+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:20.077232+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:21.077393+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:22.077667+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:23.078181+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:24.078356+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:25.078527+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:26.078661+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:27.078869+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:28.078997+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:29.079167+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:30.079332+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:31.079484+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:32.079648+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:33.079885+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:34.080113+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:35.080290+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:36.080450+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:37.080597+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:38.080858+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:39.081098+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:40.081300+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:41.081462+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:42.081606+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:43.081835+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:44.082015+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:45.082246+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:46.082563+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:47.082749+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:48.083158+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:49.083410+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:50.083592+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:51.083882+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:52.084103+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:53.084378+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:54.084598+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:55.084918+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:56.085108+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:57.085288+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:58.085494+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:59.085683+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:00.085919+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:01.086044+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:02.086227+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:03.086475+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:04.086687+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:05.087312+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:06.087493+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:07.087896+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:08.088150+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:09.088547+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:10.088772+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:11.088996+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:12.089214+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:13.089431+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:14.089684+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:15.090162+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:16.090412+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:17.090594+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:18.090762+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:19.090924+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:20.091089+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:21.091250+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:22.091445+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:23.091713+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:24.091927+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:25.092111+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:26.092257+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:27.092453+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:28.092633+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:29.092832+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:30.093000+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:31.093200+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:32.093434+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:33.093728+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:34.093904+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:35.094078+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:36.122748+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets getting new tickets!
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:37.123116+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _finish_auth 0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:37.124310+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:38.123261+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:39.123401+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:40.123570+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:41.123740+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:42.123926+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:43.124117+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:44.124318+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:45.124464+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:46.124603+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:47.124790+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:48.125074+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:49.125263+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:50.125426+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:51.125620+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:52.125844+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:53.126056+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:54.126271+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:55.126471+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:56.126699+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:57.126906+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:58.127057+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:59.127260+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:00.127476+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:01.127651+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:02.127838+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:03.128034+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:04.128189+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:05.128396+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:06.128541+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:07.128684+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:08.128883+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:09.129034+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:10.129195+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:11.129359+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:12.129532+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:13.129726+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:14.129930+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:15.130123+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:16.130290+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:17.130461+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:18.130610+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:19.130904+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:20.131114+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:21.131306+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:22.131473+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:23.131697+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:24.131911+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:25.132073+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:26.132291+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:27.132460+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:28.132642+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:29.132978+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:30.133150+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:31.133349+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:32.133536+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:33.133740+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:34.133893+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:35.134057+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:36.134186+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:37.134295+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:38.134546+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:39.134734+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:40.134961+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:41.135158+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:42.135367+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:43.135614+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:44.135834+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:45.136035+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:46.136266+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:47.136426+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:48.136578+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:49.136791+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:50.136989+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:51.137167+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:52.137335+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:53.137597+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:54.137838+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:55.138025+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:56.138223+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:57.138387+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:58.138624+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:59.138887+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:00.139116+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:01.139363+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:02.139510+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:03.139769+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:04.140081+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:05.140265+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:06.140477+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:07.140693+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:08.140886+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:09.141075+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:10.141330+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:11.141558+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:12.141768+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:13.142025+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:14.142205+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:15.142349+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:16.142509+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:17.142662+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:18.142828+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:19.142974+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:20.143173+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:21.143325+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:22.143496+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:23.143677+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:24.143875+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:25.144071+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:26.144251+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:27.144460+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:28.144605+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:29.144762+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:30.144930+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:31.145179+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:32.145330+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:33.145576+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:34.145736+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:35.145919+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:36.146090+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:37.146294+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:38.146464+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:39.146596+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:40.146746+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:41.146893+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:42.147069+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:43.147314+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:44.147519+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:45.147691+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:46.147885+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:47.148055+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:48.148294+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:49.148458+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:50.148663+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:51.148886+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:52.149092+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:53.149325+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:54.149570+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:55.149833+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:56.150094+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:57.150385+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:58.150708+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:59.151943+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:00.153070+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:01.153248+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:02.153713+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:03.154266+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:04.154608+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:05.154846+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:06.155068+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:07.155276+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:08.155492+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:09.155684+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:10.155865+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:11.156028+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:12.156287+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:13.156549+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:14.156875+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:15.157124+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:16.157282+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:17.157491+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:18.157672+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:19.157871+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:20.158045+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:21.158192+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:22.158916+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:23.159129+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:24.159398+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:25.159605+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:26.159765+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:27.159934+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:28.160077+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:29.160235+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:30.160623+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:31.164770+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:32.164979+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:33.166296+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:34.166867+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:35.167102+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:36.167284+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:37.167744+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:38.168140+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:39.168433+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:40.168895+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:41.169150+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:42.169359+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:43.170056+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:44.170381+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:45.170683+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:46.170885+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:47.171142+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:48.171334+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:49.171491+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:50.171710+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:51.171988+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:52.172140+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:53.172359+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:54.172517+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:55.172764+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:56.173092+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:57.173386+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:58.173653+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:59.173862+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:00.174029+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:01.174218+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:02.174391+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:03.174588+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:04.174737+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:05.174885+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:06.175042+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:07.175227+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:08.175367+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:09.175560+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:10.175726+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:11.175882+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:12.176025+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:13.176218+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:14.176411+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:15.176568+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:16.176703+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:17.176860+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:18.177030+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:19.177197+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:20.177365+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:21.177588+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:22.177868+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:23.178105+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:24.178264+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:25.178449+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:26.178588+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:27.178765+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:28.178943+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:29.179138+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:30.179302+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:31.179545+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:32.179686+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:33.179852+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:34.180006+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:35.180135+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:36.180299+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:37.180460+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:38.180617+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:39.180787+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:40.181000+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:41.181216+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:42.181386+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:43.181632+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:44.181873+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:45.182069+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:46.182238+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:47.182378+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:48.182583+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:49.182735+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:50.182905+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:51.183144+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:52.183354+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:53.183572+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:54.183732+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:55.183987+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:56.184157+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:57.184344+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:58.184507+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:59.184697+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:00.184889+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:01.185078+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:02.185221+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:03.185417+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:04.185614+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
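[annotation] The _resize_shards lines show how the priority-cache autotuner splits the bluestore cache among kv, onode, metadata, and data shards. The four *_alloc values sum to just under the reported cache_size (the remainder is presumably rounding/headroom), and cache_size itself sits at roughly two thirds of the 4 GiB memory target shown in the tune_memory lines. Arithmetic sketch over the numbers above:

    cache_size = 2845415832
    allocs = {
        "kv":       1207959552,   # kv_alloc
        "kv_onode": 234881024,    # kv_onode_alloc
        "meta":     1140850688,   # meta_alloc
        "data":     218103808,    # data_alloc
    }
    assigned = sum(allocs.values())
    print(assigned)                    # 2801795072
    print(cache_size - assigned)       # ~43.6 MB left unassigned
    print(cache_size / 4294967296)     # ~0.66 of the 4 GiB target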
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:05.185787+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:06.185987+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:07.186146+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-mon[75144]: from='client.14602 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:19 compute-0 ceph-mon[75144]: pgmap v1570: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:19 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2495314014' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 20:57:19 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3306931745' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 20:57:19 compute-0 ceph-mon[75144]: from='client.14608 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:19 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/204254951' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
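[annotation] The cmd=[{"prefix": ...}] entries in these audit lines are the JSON command envelopes the CLI sends to the mon/mgr for dispatch. The same envelope can be issued programmatically; a sketch using the python-rados binding, assuming a readable /etc/ceph/ceph.conf and an admin keyring on the host:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Same command shape as the audit lines above.
    cmd = json.dumps({"prefix": "mgr versions", "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode())
    cluster.shutdown()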
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:08.186344+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:09.186523+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:10.186659+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:11.186896+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:12.187055+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:13.187317+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:14.187456+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:15.187649+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:16.188084+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:17.188309+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:18.188540+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:19.188846+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:20.189030+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:21.189204+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:22.189363+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:23.189570+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:24.189759+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:25.190058+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:26.190289+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:27.190478+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:28.190687+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:29.190894+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:30.191073+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:31.191263+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:32.191454+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:33.191699+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:34.191838+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:35.191989+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:36.192210+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 3955 writes, 18K keys, 3955 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 3955 writes, 313 syncs, 12.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
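[annotation] The derived figures in this DB Stats dump follow directly from the raw counters: 3955 WAL writes over 313 syncs gives the reported 12.64 writes per sync, and 0.02 GB ingested over the 3000.1 s uptime comes to roughly 0.007 MB/s, which rounds to the printed 0.01 MB/s. Checking the arithmetic:

    writes, syncs = 3955, 313
    print(writes / syncs)               # ~12.64 writes per sync

    ingest_gb, uptime_s = 0.02, 3000.1
    print(ingest_gb * 1024 / uptime_s)  # ~0.0068 MB/s -> reported as 0.01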
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:37.192392+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:38.192622+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:39.192918+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:40.193113+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:41.193238+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:42.193474+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:43.193681+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:44.193863+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:19 compute-0 ceph-osd[91367]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:19 compute-0 ceph-osd[91367]: bluestore.MempoolThread(0x55b17da57b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 369261 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:45.194005+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:46.194126+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: do_command 'config diff' '{prefix=config diff}'
Nov 25 20:57:19 compute-0 ceph-osd[91367]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 20:57:19 compute-0 ceph-osd[91367]: do_command 'config show' '{prefix=config show}'
Nov 25 20:57:19 compute-0 ceph-osd[91367]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 20:57:19 compute-0 ceph-osd[91367]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 20:57:19 compute-0 ceph-osd[91367]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 20:57:19 compute-0 ceph-osd[91367]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 20:57:19 compute-0 ceph-osd[91367]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
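[annotation] The do_command pairs above are the OSD's admin-socket handler servicing 'config diff', 'config show', 'counter dump', and 'counter schema' back to back; together with the nearby "insights" and "log last" dispatches, this looks like a diagnostics collector sweeping the daemons. The same endpoints can be hit by hand through the admin socket; a sketch, assuming the ceph CLI is available where the daemon's asok is reachable:

    import json
    import subprocess

    # Equivalent to the admin-socket call logged above.
    out = subprocess.run(
        ["ceph", "daemon", "osd.2", "config", "show"],
        capture_output=True, text=True, check=True,
    )
    cfg = json.loads(out.stdout)
    print(len(cfg), "config values")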
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60456960 unmapped: 1351680 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:47.194248+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: prioritycache tune_memory target: 4294967296 mapped: 60301312 unmapped: 1507328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:19 compute-0 ceph-osd[91367]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x998b2/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: tick
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_tickets
Nov 25 20:57:19 compute-0 ceph-osd[91367]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:48.194356+0000)
Nov 25 20:57:19 compute-0 ceph-osd[91367]: do_command 'log dump' '{prefix=log dump}'
Nov 25 20:57:19 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 20:57:19 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 20:57:19 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509499864' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 20:57:19 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14616 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:19 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 20:57:19 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2744292640' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 20:57:19 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14620 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mon[75144]: from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1509499864' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mon[75144]: from='client.14616 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2744292640' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mon[75144]: from='client.14620 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 20:57:20 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/982165852' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14624 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1571: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 20:57:20 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128068611' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:20 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.828881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104240828936, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 663, "num_deletes": 255, "total_data_size": 493100, "memory_usage": 506840, "flush_reason": "Manual Compaction"}
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104240833160, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 486446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31718, "largest_seqno": 32380, "table_properties": {"data_size": 482866, "index_size": 1360, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8439, "raw_average_key_size": 19, "raw_value_size": 475570, "raw_average_value_size": 1088, "num_data_blocks": 60, "num_entries": 437, "num_filter_entries": 437, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764104196, "oldest_key_time": 1764104196, "file_creation_time": 1764104240, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 4322 microseconds, and 2210 cpu microseconds.
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.833212) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 486446 bytes OK
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.833228) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.835017) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.835034) EVENT_LOG_v1 {"time_micros": 1764104240835029, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.835051) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 489476, prev total WAL file size 489476, number of live WAL files 2.
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.835475) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303031' seq:72057594037927935, type:22 .. '6C6F676D0031323532' seq:0, type:0; will stop at (end)
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(475KB)], [74(4571KB)]
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104240835514, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 5167181, "oldest_snapshot_seqno": -1}
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 4498 keys, 5067757 bytes, temperature: kUnknown
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104240876584, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 5067757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5039774, "index_size": 15650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 112760, "raw_average_key_size": 25, "raw_value_size": 4960853, "raw_average_value_size": 1102, "num_data_blocks": 655, "num_entries": 4498, "num_filter_entries": 4498, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764101107, "oldest_key_time": 0, "file_creation_time": 1764104240, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e268f949-cc37-4e61-bd9c-5215f99d2d7b", "db_session_id": "BBUKM01M1VKNQ9NGVXH7", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.876956) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 5067757 bytes
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.878244) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.5 rd, 123.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 4.5 +0.0 blob) out(4.8 +0.0 blob), read-write-amplify(21.0) write-amplify(10.4) OK, records in: 5020, records dropped: 522 output_compression: NoCompression
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.878273) EVENT_LOG_v1 {"time_micros": 1764104240878261, "job": 42, "event": "compaction_finished", "compaction_time_micros": 41171, "compaction_time_cpu_micros": 25459, "output_level": 6, "num_output_files": 1, "total_output_size": 5067757, "num_input_records": 5020, "num_output_records": 4498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104240878551, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764104240880200, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.835394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.880346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.880353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.880356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.880358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 20:57:20 compute-0 ceph-mon[75144]: rocksdb: (Original Log Time 2025/11/25-20:57:20.880361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
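[annotation] The amplification figures in the compaction summary above can be reproduced from the event-log byte counts: JOB 42 read one 486,446-byte L0 file (table #76, the flush output) plus one 4,680,735-byte L6 file (input_data_size 5,167,181) and wrote a single 5,067,757-byte L6 table, giving the reported write-amplify of 10.4 and read-write-amplify of 21.0 relative to the L0 input:

    l0_in = 486446           # table #76, the Level-0 flush output
    total_in = 5167181       # input_data_size from compaction_started
    total_out = 5067757      # table #77 size from table_file_creation

    print(total_out / l0_in)               # ~10.4  (write-amplify)
    print((total_in + total_out) / l0_in)  # ~21.0  (read-write-amplify)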
Nov 25 20:57:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 20:57:21 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78550499' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 20:57:21 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/982165852' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 20:57:21 compute-0 ceph-mon[75144]: from='client.14624 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:21 compute-0 ceph-mon[75144]: pgmap v1571: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:21 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/128068611' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 20:57:21 compute-0 ceph-mon[75144]: from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:21 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/78550499' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 20:57:21 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14632 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:21 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14636 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:21 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 20:57:21 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320022411' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 20:57:21 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14638 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:22 compute-0 ceph-mon[75144]: from='client.14632 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:22 compute-0 ceph-mon[75144]: from='client.14636 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:22 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3320022411' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 20:57:22 compute-0 ceph-mon[75144]: from='client.14638 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:22 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1572: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 20:57:22 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614921163' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 20:57:22 compute-0 crontab[284659]: (root) LIST (root)
Nov 25 20:57:22 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14646 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:22 compute-0 ceph-mgr[75443]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 20:57:22 compute-0 ceph-712dd110-763a-5547-8ef7-acda1414fdce-mgr-compute-0-hdjasd[75439]: 2025-11-25T20:57:22.741+0000 7f92c2df5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
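[annotation] The (95) Operation not supported reply here is expected: 'healthcheck history ls' is served by the prometheus mgr module, which the message itself says is not enabled/loaded, and the remedy is the exact command the error text suggests. A sketch that checks first and only enables when needed (assumes admin CLI access; the "enabled_modules" key comes from the JSON form of `ceph mgr module ls`):

    import json
    import subprocess

    def ceph(*args):
        return subprocess.run(
            ["ceph", *args], capture_output=True, text=True, check=True
        ).stdout

    mods = json.loads(ceph("mgr", "module", "ls", "--format", "json"))
    if "prometheus" not in mods.get("enabled_modules", []):
        ceph("mgr", "module", "enable", "prometheus")  # as the error advises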
Nov 25 20:57:22 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 20:57:22 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806572694' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 20:57:23 compute-0 ceph-mon[75144]: pgmap v1572: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:23 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1614921163' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 20:57:23 compute-0 ceph-mon[75144]: from='client.14646 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:23 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2806572694' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 20:57:23 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 20:57:23 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410680739' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 20:57:23 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 20:57:23 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359322773' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 20:57:23 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 20:57:23 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450518948' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 20:57:23 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 20:57:23 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1682775689' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:04.145790+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:05.145998+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
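The _resize_shards line is the BlueStore cache autotuner reporting its split. Reading the figures as bytes (which the tiny kv_used/meta_used values suggest), the 2845415832-byte cache is carved into kv, onode, metadata and data shards, and the surrounding tune_memory lines show why it holds steady: the 4294967296-byte (4 GiB) target dwarfs the ~58 MiB actually mapped. The recurring RocksDB pair (0.285714 ≈ 2/7 and 0.0555556 ≈ 1/18) looks like the high-priority pool ratio being re-committed for two block caches each cycle. The arithmetic, as a sketch:

cache_size = 2_845_415_832          # from the _resize_shards line above
shards = {
    "kv":       1_207_959_552,
    "kv_onode":   234_881_024,
    "meta":     1_140_850_688,
    "data":       218_103_808,
}
for name, alloc in shards.items():
    print(f"{name:8s} {alloc / 2**20:7.1f} MiB  {alloc / cache_size:6.1%}")
# -> kv 1152 MiB (42.5%), kv_onode 224 MiB (8.3%),
#    meta 1088 MiB (40.1%), data 208 MiB (7.7%); the remainder is slack.
print(f"target {4_294_967_296 / 2**30:.0f} GiB, mapped {61_161_472 / 2**20:.1f} MiB")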
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
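The heartbeat line packs the OSD's statfs into hex. Assuming the usual store_statfs_t print order — available/internally-reserved/total, then data stored/allocated — osd.1 is a ~20 GiB device that is essentially empty, which squares with the pgmap line above (3 OSDs x 20 GiB = 60 GiB avail; 80 MiB used is roughly 3 x the ~26 MiB "meta" figure). A decoding sketch under that field-order assumption:

import re

LINE = ("osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, "
        "data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), "
        "peers [0,2] op hist [])")

m = re.search(r"store_statfs\(0x([0-9a-f]+)/0x([0-9a-f]+)/0x([0-9a-f]+), "
              r"data 0x([0-9a-f]+)/0x([0-9a-f]+)", LINE)
avail, reserved, total, stored, allocated = (int(g, 16) for g in m.groups())

# Field order is an assumption about store_statfs_t's printer; verify
# against your Ceph release before trusting the labels.
print(f"total     {total / 2**30:6.2f} GiB")
print(f"available {avail / 2**30:6.2f} GiB")
print(f"data      {stored / 2**10:6.0f} KiB stored, {allocated / 2**10:.0f} KiB allocated")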
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:06.146266+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:07.146445+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:08.146600+0000)
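One oddity worth noticing in the tick/_check_auth_tickets/_check_auth_rotating triads: the rotating-key expiry advances exactly one second per triad (20:23:04, 20:23:05, ...) while every syslog timestamp stays pinned at 20:57:23. That is the signature of buffered debug output being flushed in a burst, not of ticks actually firing this fast. A sketch that recovers the real cadence from the embedded expiry times (journal filename hypothetical):

import re
from datetime import datetime

expiries = []
with open("compute-0-journal.log") as fh:  # hypothetical export of this journal
    for line in fh:
        m = re.search(r"expire after (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+)", line)
        if m:
            expiries.append(datetime.fromisoformat(m.group(1)))

deltas = [(b - a).total_seconds() for a, b in zip(expiries, expiries[1:])]
if deltas:
    # ~1.0 s spacing here, despite identical wall-clock prefixes.
    print(f"{len(deltas) + 1} ticks, spacing {min(deltas):.3f}-{max(deltas):.3f} s")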
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:09.146768+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:10.146916+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:11.147045+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:12.147170+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:13.147309+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:14.147499+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:15.147657+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:16.147835+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:17.148010+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:18.148165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:19.148331+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:20.148468+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:21.148659+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:22.148847+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:23.149057+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:24.149237+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:25.149370+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:26.149564+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:27.149749+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:28.149933+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:29.150111+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:30.150307+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:31.150585+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:32.150738+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:33.150919+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:34.151063+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:35.151192+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:36.151362+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:37.151598+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:38.151757+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:39.151972+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:40.152113+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:41.152350+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:42.152550+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:43.152719+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:44.152969+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:45.153106+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:46.153286+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:47.153434+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:48.153675+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:49.153856+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:50.154019+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:51.154195+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:52.154347+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:53.154512+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:54.154776+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:55.155023+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:56.155183+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:57.155335+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:58.155437+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:59.155669+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:00.155784+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:01.155936+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:02.156115+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:03.156293+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:04.156593+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:05.156832+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:06.157006+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:07.157177+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:08.157323+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:09.157482+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:10.157702+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:11.157907+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:12.158078+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:13.158338+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:14.158470+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:15.158602+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:16.158857+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:17.159013+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:18.159179+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:19.159351+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:20.159501+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:21.159644+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:22.159771+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:23.159847+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:24.159992+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:25.160136+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:26.160338+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:27.161071+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:28.161197+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:29.161404+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:30.162337+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:31.162467+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:32.162870+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:33.162998+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:34.163119+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:35.163242+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:36.163402+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:37.163539+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:38.163682+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:39.163877+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:40.164402+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:41.164608+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:42.165174+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:43.165679+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:44.165844+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:45.166181+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:46.166399+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:47.166940+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:48.167155+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:49.167331+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:50.167589+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:51.167782+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:52.167995+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:53.168199+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:54.168321+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:55.168604+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:56.168922+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:57.169159+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:58.171253+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:59.171434+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:00.171625+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:01.171861+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:02.172000+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:03.172247+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:04.172389+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:05.173388+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:06.173635+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:07.173837+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:08.174582+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:09.175277+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:10.175445+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:11.175626+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:12.175784+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:13.176032+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:14.176181+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:15.176625+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:16.177189+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:17.177473+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:18.177674+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:19.177913+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:20.178327+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:21.178721+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:22.178956+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:23.179178+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:24.179335+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:25.179593+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:26.179978+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:27.180473+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:28.180648+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:29.180768+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:30.180985+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:31.181165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:32.181298+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:33.181448+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:34.181625+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:35.181833+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:36.182073+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:37.182210+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:38.182357+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:39.182574+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:40.182745+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:41.182944+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:42.183069+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:43.183175+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:44.183321+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:45.183533+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:46.183750+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:47.183901+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:48.184214+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:49.184387+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:50.184579+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:51.184752+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:52.184893+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:53.185088+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:54.185275+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:55.185427+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:56.185644+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:57.185837+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:58.186005+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:59.186194+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:00.186456+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:01.186623+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:02.186825+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:03.187022+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:04.187170+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:05.187328+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:06.187620+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:07.187894+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:08.188038+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:09.188233+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:10.188465+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:11.188717+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:12.188924+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:13.189079+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:14.189226+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:15.189366+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:16.189537+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:17.189658+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:18.189829+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:19.189965+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:20.190133+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:21.190338+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:22.190514+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:23.190642+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:24.190767+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:25.190939+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:26.191151+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:27.191324+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:28.191459+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:29.191659+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:30.191873+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:31.192140+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5590572f2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:32.192318+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:33.192522+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:34.192691+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:35.192902+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:36.193074+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:37.193308+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:38.193467+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:39.194064+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:40.194519+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:41.194771+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:42.194874+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:43.195438+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:44.195669+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:45.196161+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:46.196514+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:47.196715+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:48.196949+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:49.197148+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:50.197302+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:51.197453+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:52.197633+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:53.197888+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:54.198124+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:55.198276+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:56.198508+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:57.198691+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:58.198887+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:59.199068+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:00.199253+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:01.199421+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:02.199650+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:03.200066+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:04.200189+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:05.200356+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:06.200562+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:07.200736+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:08.200907+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:09.201105+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:10.201321+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:11.201502+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:12.201687+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:13.201890+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:14.202139+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:15.202331+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:16.202656+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:17.202880+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:18.203012+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:19.203167+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:20.203334+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:21.203536+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:22.203721+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:23.203932+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:24.204117+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:25.204316+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:26.204511+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:27.204662+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:28.204881+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:29.205086+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:30.205222+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:31.205390+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:32.205519+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:33.205718+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:34.205885+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:35.206061+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:36.206256+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:37.206445+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:38.206624+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:39.206909+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:40.207093+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:41.207303+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:42.207509+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:43.207696+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:44.207875+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:45.208022+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:46.208236+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:47.208374+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:48.208553+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:49.208762+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:50.208869+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:51.209032+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:52.209259+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:53.209441+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:54.209614+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:55.209744+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:56.209924+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:57.210141+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:58.210355+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:59.210507+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:00.210703+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:01.210862+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:02.211135+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:03.211296+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:04.211446+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:05.211614+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:06.211831+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:07.211996+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:08.212163+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:09.212296+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:10.212516+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:11.212751+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:12.212960+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:13.213160+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:14.213327+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:15.213495+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:16.213725+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:17.213974+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:18.214163+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:19.214401+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:20.214657+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:21.214835+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:22.215078+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:23.215270+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:24.215459+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:25.215624+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:26.215876+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:27.216052+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:28.216240+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:29.216459+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:30.216688+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:31.216869+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:32.217015+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:33.217205+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:34.217404+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:35.217570+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:36.217841+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:37.218012+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:38.218190+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:39.218392+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:40.218574+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:41.218757+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:42.218915+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:43.219248+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:44.219429+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:45.219709+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:46.220034+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:47.220219+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:48.220472+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:49.220677+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:50.220973+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:51.221179+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:52.221454+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:53.221620+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:54.221766+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:55.221897+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:56.222155+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:57.222298+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:58.222428+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:59.222581+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:00.222754+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:01.222992+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:02.223219+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:03.223354+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:04.223542+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:05.223706+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:06.223895+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:07.224057+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:08.224239+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:09.224445+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:10.224621+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:11.224853+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:12.225050+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:13.225259+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:14.225459+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:15.225645+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:16.225859+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:17.226034+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:18.226188+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:19.226352+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:20.226588+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:21.226770+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:22.226970+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:23.227165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:24.227322+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:25.227502+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:26.227747+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:27.227935+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:28.228133+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:29.228331+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:30.228520+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:31.228714+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:32.228865+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:33.229014+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:34.229183+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:35.229343+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:36.229514+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:37.229738+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:38.229943+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:39.230127+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:40.230343+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:41.230515+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:42.230694+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:43.230890+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:44.231074+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:45.231259+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:46.231448+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:47.231598+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:48.231773+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:49.231953+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:50.232145+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:51.232353+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:52.232501+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:53.232694+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:54.232895+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:55.233116+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:56.233333+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:57.233500+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:58.233718+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:59.233913+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:00.234103+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:01.234271+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:02.234444+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:03.234576+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:04.234753+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:05.234903+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:06.235110+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:07.235243+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:08.235388+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:09.235584+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:10.235762+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:11.235922+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:12.236106+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:13.236320+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:14.236483+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:15.236644+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:16.236876+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:17.237087+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:18.237300+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:19.237504+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:20.237694+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:21.237860+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:22.238102+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:23.238263+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:24.238465+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:25.238738+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:26.239063+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:27.239313+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:28.239486+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:29.239643+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:30.239851+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:31.240080+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:32.240270+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:33.240476+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:34.240680+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:35.240885+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:36.241112+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:37.241297+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:38.241540+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:39.241720+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:40.242052+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:41.242202+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:42.242385+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:43.242555+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:44.242736+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:45.242917+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:46.243165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:47.243322+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:48.244459+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:49.244878+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:50.245037+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:51.245217+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:52.246055+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:53.246443+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:54.247448+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:55.247845+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:56.248217+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:57.248422+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:58.248724+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:59.249411+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:00.249636+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:01.249878+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:02.250139+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:03.250459+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:04.250854+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:05.251030+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:06.251313+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:07.251538+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:08.251726+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:09.251923+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:10.252099+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:11.252266+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:12.252506+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:13.252702+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:14.252877+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:15.253092+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:16.253373+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:17.253519+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:18.253813+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:19.254266+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:20.254459+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:21.254606+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:22.254892+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:23.255165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:24.255410+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:25.255609+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:26.255851+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:27.256069+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:28.256310+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:29.256522+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:30.256747+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:31.256983+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:32.257170+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:33.257325+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:34.257482+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:35.257646+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:36.257879+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:37.258052+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:38.258270+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:39.258582+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:40.258896+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:41.259047+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:42.259185+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:43.259392+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:44.259570+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:45.259765+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:46.260034+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:47.260234+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:48.260398+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:49.260625+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:50.260856+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:51.261045+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:52.261252+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:53.261458+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:54.261628+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:55.261852+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:56.262169+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:57.266941+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:58.271060+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:59.273885+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:00.276135+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:01.276920+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:02.277425+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:03.277710+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:04.278545+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:05.279105+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:06.279434+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:07.279577+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:08.279879+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:09.280024+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:10.280380+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:11.280711+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:12.280998+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:13.281125+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:14.281295+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:15.281566+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:16.281879+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:17.282153+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:18.282366+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:19.282520+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:20.282687+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:21.282852+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:22.282992+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:23.283151+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:24.283304+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:25.283535+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:26.284188+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:27.284312+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:28.284531+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:29.284666+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:30.284875+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:31.285062+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:32.286079+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:33.286247+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:34.286402+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:35.286563+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:36.286861+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:37.287072+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:38.287270+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:39.287434+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:40.287618+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:41.287871+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:42.288064+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:43.288209+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:44.288402+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:45.288573+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:46.288773+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:47.288950+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:48.289124+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:49.289301+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:50.289473+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:51.289667+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:52.289867+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:53.290043+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:54.290188+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:55.290381+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:56.290599+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:57.290835+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:58.291017+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:59.291198+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:00.291396+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:01.291540+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:02.291787+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:03.292078+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:04.292355+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:05.292548+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:06.292928+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:07.293268+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:08.293553+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:09.293776+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:10.293983+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:11.294096+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:12.294309+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:13.294527+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:14.294719+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:15.294887+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:16.295106+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:17.295328+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:18.295576+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:19.295742+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:20.298148+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:21.298311+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:22.298517+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:23.298717+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:24.298954+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:25.299096+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:26.299323+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:27.299488+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:28.299670+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:29.299935+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:30.300063+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:31.300228+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:32.300359+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:33.300487+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:34.300691+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:35.300894+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:36.301104+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:37.301288+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:38.301457+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:39.301594+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:40.301746+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:41.301922+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:42.302070+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:43.302232+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:44.302401+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:45.302556+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:46.302789+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:47.303013+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:48.303218+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:49.303397+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:50.303592+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:51.303753+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:52.303898+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:53.304083+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:54.304203+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:55.304358+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:56.304560+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:57.304716+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:58.305357+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:59.305556+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:00.305780+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:01.305985+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:02.306127+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:03.306286+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:04.307399+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:05.307583+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:06.307883+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:07.309505+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:08.311073+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:09.311590+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:10.312785+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:11.313310+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:12.314262+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:13.314616+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:14.315172+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:15.315501+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:16.315893+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:17.316740+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:18.317363+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:19.317750+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:20.318427+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:21.318747+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:22.319369+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:23.319929+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:24.320451+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:25.320849+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:26.321093+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:27.321507+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:28.321887+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:29.322192+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:30.322460+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:31.322705+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:32.322939+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:33.323085+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:34.323395+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:35.323710+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:36.324053+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:37.324345+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:38.324571+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:39.324733+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:40.324939+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:41.325125+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:42.325342+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:43.325500+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:44.325746+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:45.325928+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:46.326096+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:47.326271+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:48.326506+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:49.326698+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:50.326959+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:51.327184+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:52.327479+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:53.327753+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:54.327879+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:55.328126+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:56.328358+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:57.328525+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:58.328709+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:59.328889+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:00.329088+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:01.329223+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:02.329431+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:03.329625+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:04.329791+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:05.330016+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:06.330251+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:07.330438+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:08.330634+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:09.330779+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:10.330985+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:11.331266+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:12.331683+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:13.331970+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:14.332165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:15.332363+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:16.333002+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:17.333483+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:18.333996+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:19.334499+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:20.334711+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:21.335092+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:22.335501+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:23.335971+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:24.336388+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:25.336853+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:26.337138+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:27.337448+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:28.337621+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:29.337852+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:30.338345+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:31.338602+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:32.338884+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:33.339107+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:34.339293+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:35.339455+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:36.339711+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:37.339997+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:38.340202+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:39.340452+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:40.340715+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:41.340920+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:42.341112+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:43.341267+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:44.341432+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:45.341611+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:46.341896+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:47.342078+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:48.342261+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:49.342431+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:50.342599+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:51.342783+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:52.342952+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:53.343097+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:54.343213+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:55.343369+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:56.343495+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:57.343637+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:58.343858+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:59.344052+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:00.344253+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:01.344406+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:02.344570+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:03.344865+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:04.345054+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:05.345263+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:06.345508+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:07.345718+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:08.345882+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:09.346058+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:10.346268+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:11.346435+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:12.346602+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:13.346740+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:14.346926+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:15.347947+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:16.350746+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:17.352561+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:18.353624+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:19.355340+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:20.356425+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:21.357635+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:22.358026+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:23.358269+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:24.358713+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:25.359598+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:26.359871+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:27.361558+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:28.362112+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:29.362397+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:30.362592+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:31.363077+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:32.363335+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:33.363571+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:34.363775+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:35.363971+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:36.364146+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: mgrc ms_handle_reset ms_handle_reset con 0x559057449c00
Nov 25 20:57:23 compute-0 ceph-osd[90092]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/446496168
Nov 25 20:57:23 compute-0 ceph-osd[90092]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/446496168,v1:192.168.122.100:6801/446496168]
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: get_auth_request con 0x559058ed3400 auth_method 0
Nov 25 20:57:23 compute-0 ceph-osd[90092]: mgrc handle_mgr_configure stats_period=5
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:37.364320+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:38.364543+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:39.364654+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:40.364884+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:41.365119+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:42.365336+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:43.365579+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:44.365866+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:45.366079+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:46.366254+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:47.366384+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:48.366581+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:49.366704+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:50.366847+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:51.367004+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:52.367165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:53.367412+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:54.367576+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:55.367775+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:56.367994+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:57.368193+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:58.368345+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:59.368498+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:00.368675+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:01.368862+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:02.368989+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:03.369178+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:04.369382+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:05.369528+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:06.369690+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:07.369854+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:08.370364+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:09.370567+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:10.370728+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:11.370946+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:12.371128+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:13.371271+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:14.371403+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:15.371574+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:16.371785+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:17.371990+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:18.372171+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:19.372361+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:20.372553+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:21.372755+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:22.372952+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:23.373199+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:24.373434+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:25.373690+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:26.374000+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:27.374186+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:28.374379+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:29.374541+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:30.374720+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:31.374927+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:32.375111+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:33.375293+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:34.375468+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:35.375672+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:36.375945+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:37.376139+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:38.376311+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:39.376541+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:40.376718+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:41.376920+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:42.377119+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:43.377352+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:44.377558+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:45.377746+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:46.378156+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:47.378316+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:48.378549+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:49.378845+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:50.379062+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:51.379256+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:52.379450+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:53.379657+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:54.379923+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:55.380163+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:56.380458+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:57.380629+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:58.380886+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:59.381063+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:00.381274+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:01.381509+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:02.381735+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:03.381931+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:04.382142+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:05.382313+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:06.382583+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:07.382910+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:08.383204+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:09.383498+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:10.383899+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:11.384219+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:12.384414+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:13.384621+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:14.384883+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:15.385193+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:16.385452+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:17.385668+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:18.385942+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:19.386213+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:20.386398+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:21.386588+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:22.386743+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:23.386971+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:24.387138+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:25.387329+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:26.387555+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:27.387736+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:28.387878+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:29.388093+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:30.388257+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:10.409872+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:11.410032+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:12.410221+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:13.410430+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:14.410581+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:15.410755+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:16.410984+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:17.411137+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:18.411333+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:19.411537+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:20.411739+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:21.411926+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:22.412155+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:23.412366+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:24.412540+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:25.412735+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:26.413057+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:27.413273+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:28.413464+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:29.413637+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:30.413856+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:31.413979+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:32.414122+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:33.414285+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:34.414473+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:35.414619+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:36.415190+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:37.415426+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:38.415592+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:39.415889+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:40.416100+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:41.416347+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:42.416617+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:43.417105+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:44.417492+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:45.417712+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:46.418001+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:47.418255+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:48.418495+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:49.418761+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:50.419064+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:51.419284+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:52.419592+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:53.419844+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:54.420052+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:55.420242+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:56.420614+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:57.420908+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:58.421097+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:59.421362+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:00.421535+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:01.421783+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:02.422054+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:03.422248+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:04.422453+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:05.422629+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:06.422890+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:07.423106+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:08.423306+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:09.423479+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:10.423668+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:11.423901+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:12.424121+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:13.424359+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:14.424542+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:15.424747+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:16.425050+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:17.425252+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:18.425383+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:19.425592+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:20.425946+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:21.426157+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:22.426299+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:23.426487+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:24.426649+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:25.426894+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:26.427151+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:27.427336+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:28.427546+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:29.427734+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:30.427888+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:31.428083+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:32.428291+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:33.428487+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:34.428709+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:35.428958+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:36.429229+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:37.429554+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:38.429843+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:39.430018+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:40.430281+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:41.430489+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:42.430671+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:43.430858+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:44.431116+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:45.431329+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:46.431566+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
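This one-off _send_mon_message line breaks the tick pattern and shows where the auth traffic goes: monitor mon.compute-0 over messenger v2. Ceph prints entity addresses as <protocol>:<ip>:<port>/<nonce>; 3300 is the standard msgr2 monitor port, and the trailing /0 is the nonce that distinguishes daemon instances behind the same address. A tolerant parser for the address shapes that appear in logs like this one (an illustrative helper, not part of any Ceph library):

    import re

    ADDR_RE = re.compile(r"^(v[12]):(\[?[0-9A-Fa-f:.]+\]?):(\d+)/(\d+)$")

    def parse_entity_addr(addr):
        # Split e.g. 'v2:192.168.122.100:3300/0' into its four parts.
        m = ADDR_RE.match(addr)
        if not m:
            raise ValueError(f"not an entity address: {addr!r}")
        proto, host, port, nonce = m.groups()
        return {"proto": proto, "host": host.strip("[]"),
                "port": int(port), "nonce": int(nonce)}

    print(parse_entity_addr("v2:192.168.122.100:3300/0"))
    # {'proto': 'v2', 'host': '192.168.122.100', 'port': 3300, 'nonce': 0}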
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:47.431742+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:48.431882+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:49.432068+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:50.432317+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:51.432569+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:52.432789+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:53.433046+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:54.433279+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:55.433465+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:56.433710+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:57.433896+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:58.434076+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:59.434281+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:00.434462+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:01.434653+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:02.434870+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:03.435096+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:04.435293+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:05.435498+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:06.438196+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:07.438355+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:08.438553+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:09.438729+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:10.438941+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:11.439163+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:12.439370+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:13.439551+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:14.439772+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:15.440019+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:16.440246+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:17.440418+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:18.440588+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:19.440878+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:20.441066+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:21.441266+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:22.441486+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:23.441729+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:24.441905+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:25.442132+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:26.442437+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:27.442720+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:28.442914+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:29.443127+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:30.443300+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:31.443603+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:32.443858+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:33.444062+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:34.444303+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:35.444508+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:36.444783+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:37.445026+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:38.445251+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:39.445434+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:40.445656+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:41.445892+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:42.446038+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:43.446276+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:44.446468+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:45.446609+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:46.446878+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:47.447116+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:48.447330+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:49.447523+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:50.447682+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:51.447849+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:52.447979+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:53.448158+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:54.448319+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:55.448468+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:56.448719+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:57.448934+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:58.449153+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:59.449345+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:00.449539+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:01.449685+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:02.449864+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:03.450050+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:04.450275+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:05.460631+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:06.463462+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:07.463695+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:08.463963+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:09.464159+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:10.464343+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:11.464518+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:12.464692+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
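At this point the tuner's mapped figure drops from 61210624 to 61186048 while the heap stays at 61808640; in these records heap equals mapped plus unmapped exactly. An illustrative parser for pulling the tune_memory series out of such lines (field layout taken directly from the lines themselves):

```python
# Illustrative parser for prioritycache tune_memory records:
# target / mapped / unmapped / heap plus the old/new aggregate cache budget.
import re

PAT = re.compile(r"tune_memory target: (\d+) mapped: (\d+) unmapped: (\d+) "
                 r"heap: (\d+) old mem: (\d+) new mem: (\d+)")

def tune_fields(line: str) -> dict:
    t, m, u, h, old, new = map(int, PAT.search(line).groups())
    return {"target": t, "mapped": m, "unmapped": u, "heap": h,
            "old_mem": old, "new_mem": new}

f = tune_fields("prioritycache tune_memory target: 4294967296 mapped: 61186048 "
                "unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832")
assert f["mapped"] + f["unmapped"] == f["heap"]  # holds for every line above
print(f"{f['mapped'] / f['target']:.1%} of the 4 GiB target is mapped")  # ~1.4%
```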
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:13.464883+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:14.465131+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:15.465350+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
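Each resize round is preceded by the two rocksdb ratios, which round to 2/7 (0.285714) and 1/18 (0.0555556), and reports per-shard budgets that should sum to no more than cache_size; here the kv, kv_onode, meta, and data allocations total 2672 MiB of a roughly 2714 MiB budget. A quick, illustrative check:

```python
# Sanity check (illustrative): the per-shard *_alloc figures reported by
# bluestore's MempoolThread should sum to no more than the cache_size budget.
import re

LINE = ("bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 "
        "kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 "
        "meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864")

fields = {k: int(v) for k, v in re.findall(r"(\w+): (\d+)", LINE)}
allocated = sum(v for k, v in fields.items() if k.endswith("_alloc"))
print(f"allocated {allocated / 2**20:.0f} MiB of "
      f"{fields['cache_size'] / 2**20:.0f} MiB budget "
      f"({allocated / fields['cache_size']:.1%})")  # 2672 of 2714 MiB, ~98.5%
```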
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:16.465686+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:17.465881+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:18.466269+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:19.466424+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:20.466633+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:21.466825+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:22.467006+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:23.467175+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:24.467362+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:25.467623+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:26.467885+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:27.468135+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:28.468373+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:29.468565+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:30.468866+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:31.469054+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:32.469229+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:33.469447+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:34.469625+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:35.469786+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:36.470572+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:37.470878+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:38.471017+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:39.471242+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:40.471426+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:41.471580+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:42.471790+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:43.472115+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:44.472293+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:45.472493+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:46.472724+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:47.472903+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:48.473078+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:49.473291+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:50.473438+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:51.473615+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:52.473840+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:53.474024+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:54.474198+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:55.474411+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:56.474657+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:57.474886+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:58.475050+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:59.475287+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:00.475505+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:01.475658+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:02.475823+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:03.476037+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:04.476241+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:05.476448+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:06.476716+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:07.477007+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:08.477265+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:09.477443+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:10.477718+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:11.477991+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:12.478222+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:13.478525+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:14.478889+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:15.479161+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:16.479419+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:17.479727+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:18.480018+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:19.480496+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:20.480746+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:21.480915+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:22.481357+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:23.481716+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:24.482168+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:25.482488+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:26.482664+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:27.483025+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:28.483290+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:29.483478+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:30.483749+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:31.483961+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:32.484125+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:33.484403+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:34.484626+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:35.484891+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:36.485074+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:37.485216+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:38.485375+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:39.485663+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:40.485885+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:41.486108+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:42.486277+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:43.486594+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:44.486893+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:45.487147+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:46.487407+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:47.487585+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:48.487856+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:49.488074+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:50.489366+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:51.490144+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:52.491384+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:53.491729+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:54.492753+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:55.493258+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:56.494017+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:57.494676+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:58.495011+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:59.495600+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:00.496112+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:01.496475+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:02.496875+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:03.497210+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:04.497498+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:05.497669+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:06.497889+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:07.498178+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:08.498404+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:09.498597+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:10.498780+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:11.499276+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:12.499458+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:13.499652+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:14.499949+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:15.500163+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:16.500370+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:17.500558+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:18.500714+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:19.500885+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:20.501089+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:21.501355+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:22.501572+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:23.501861+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:24.502092+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:25.502276+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:26.502519+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:27.502899+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:28.503123+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:29.503341+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:30.503505+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:31.503698+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:32.503894+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:33.504081+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:34.504311+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:35.504463+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:36.504688+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:37.504887+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:38.505056+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:39.505289+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:40.505545+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:41.505738+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:42.505962+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:43.506215+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:44.506468+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:45.506654+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:46.506886+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:47.507040+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:48.507199+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:49.507368+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:50.507553+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:51.507710+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:52.507883+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:53.508010+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:54.508165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:55.508557+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:56.508786+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:57.509315+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:58.509760+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:59.510558+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:00.511101+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:01.511539+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:02.512213+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:03.512404+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:04.512771+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:05.512961+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:06.513347+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:07.513533+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:08.513705+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:09.513951+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:10.514220+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:11.515233+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:12.515379+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:13.515694+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:14.515888+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:15.516031+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:16.516231+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:17.516433+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:18.516579+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:19.516764+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:20.516928+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:21.517120+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:22.517287+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:23.517472+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:24.517616+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:25.517779+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:26.518024+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:27.518172+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:28.518354+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:29.518498+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:30.518680+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:31.518907+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:32.519102+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:33.519274+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:34.519395+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:35.519540+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:36.519748+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:37.519952+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:38.520086+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:39.520244+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:40.520406+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:41.520541+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:42.520676+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:43.520787+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:44.520957+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:45.521144+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:46.521333+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:47.521515+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:48.521901+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:49.522092+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:50.522287+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:51.522423+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:52.522621+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:53.522791+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:54.522965+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:55.523138+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:56.523309+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:57.523489+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:58.523657+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:59.523827+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:00.523955+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:01.524166+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:02.524345+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:03.524515+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:04.524738+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:05.524924+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:06.525150+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:07.525344+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:08.525535+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:09.525711+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:10.525874+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:11.525957+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:12.526165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:13.526317+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:14.526489+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:15.526646+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:16.526839+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:17.526995+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:18.527325+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:19.527485+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:20.527645+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:21.527854+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:22.528085+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:23.528255+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:24.528463+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:25.528617+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:26.528830+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:27.528987+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:28.529161+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:29.529316+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:30.529470+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:31.529620+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:32.529742+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:33.534353+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:34.534506+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:35.534681+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:36.534871+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:37.535058+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:38.535239+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:39.535441+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:40.535607+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:41.535857+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:42.535949+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:43.536175+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:44.536288+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:45.536598+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:46.536763+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:47.536905+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:48.537101+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:49.537232+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:50.537323+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:51.537477+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:52.537633+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:53.537783+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:54.538003+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:55.538182+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:56.538451+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:57.538604+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:58.538766+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:59.538922+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:00.539096+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:01.539224+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:02.539352+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:03.539512+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:04.539678+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:05.539843+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:06.540045+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:07.540392+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:08.540567+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:09.540788+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:10.541044+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:11.541203+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:12.541385+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:13.541531+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:14.541673+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:15.541854+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:16.542049+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:17.542276+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:18.542423+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:19.542542+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:20.542720+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:21.542852+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:22.543010+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:23.543186+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:24.543349+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:25.543522+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:26.543703+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:27.543861+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:28.544057+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:29.544248+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:30.544410+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:31.544555+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:32.544704+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:33.544903+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:34.545083+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:35.545226+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:36.545395+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:37.547034+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:38.547160+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:39.547283+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:40.547417+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:41.547555+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:42.547673+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:43.547875+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:44.548044+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:45.548224+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:46.548493+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:47.548668+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:48.548875+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:49.549021+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:50.549275+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:51.549473+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:52.549685+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:53.549881+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:54.550056+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:55.550276+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:56.550536+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:57.550693+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:58.550854+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:59.550986+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:00.551169+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:01.551380+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:02.551547+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:03.551718+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:04.551893+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:05.552052+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:06.552314+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:07.552483+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:08.552653+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:09.552867+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:10.553048+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:11.553213+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:12.553428+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:13.553595+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:14.553754+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:15.553925+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:16.554117+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:17.554303+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:18.554490+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:19.554620+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:20.554791+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:21.555016+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:22.555180+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:23.555373+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:24.555537+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:25.555690+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:26.555894+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:27.556053+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:28.556218+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:29.556396+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:30.556555+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:31.556700+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:32.556870+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:33.557066+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:34.557211+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:35.557359+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:36.557582+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:37.557758+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:38.557904+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:39.558063+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:40.558377+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:41.558551+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:42.558909+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:43.559482+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:44.560074+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:45.560268+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:46.560496+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:47.560653+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:48.560836+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:49.561541+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:50.561702+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:51.561952+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:52.562171+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:53.562702+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:54.563758+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:55.564014+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:56.564217+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:57.564667+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:58.565054+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:59.565225+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:00.565437+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:01.565627+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:02.565873+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:03.566137+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:04.566376+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:05.566548+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:06.566834+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:07.567125+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:08.567344+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:09.567519+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:10.567765+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:11.568011+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:12.568172+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:13.568356+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:14.568550+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:15.568755+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:16.569109+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:17.569297+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:18.569438+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:19.569589+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:20.569761+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:21.569873+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:22.569997+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:23.570142+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:24.570304+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:25.571589+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:26.571831+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:27.571984+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:28.572189+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:29.572379+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:30.572561+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:31.572741+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:32.572921+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:33.573067+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:34.573253+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:35.573404+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:36.573606+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:37.573778+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:38.573972+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:39.574172+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:40.574359+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:41.574536+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:42.574683+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:43.574876+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:44.575015+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:45.575495+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:46.575740+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:47.576369+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:48.576545+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:49.576770+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:50.576985+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:51.577259+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:52.577483+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:53.577626+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:54.577766+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:55.577860+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:56.578109+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:57.578235+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:58.578427+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:59.578601+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:00.578758+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:01.578975+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:02.579170+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:03.579354+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:04.579496+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:05.579682+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:06.579918+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:07.580077+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:08.580238+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:09.580418+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:10.580623+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:11.580845+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:12.581000+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:13.581159+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:14.581427+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:15.581602+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:16.581924+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:17.582097+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:18.582297+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:19.582450+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:20.582584+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:21.582753+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:22.582931+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:23.583115+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:24.583285+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:25.583498+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:26.583720+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:27.583895+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:28.584116+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:29.584269+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:30.584369+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets getting new tickets!
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:31.584625+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _finish_auth 0
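This is the one pass in the burst where _check_auth_tickets decides the service tickets need renewal: a request goes out to mon.compute-0 over the v2 endpoint and _finish_auth 0 reports success (0 being the status code). A toy version of that renew-ahead pattern; the class, margin, and TTL here are all illustrative, not MonClient's real API:

```python
# Toy renew-ahead ticket client (assumed shapes, not Ceph's MonClient).
from datetime import datetime, timedelta, timezone

RENEW_MARGIN = timedelta(minutes=5)  # assumed renew-ahead margin

class TicketClient:
    def __init__(self, expiry: datetime):
        self.expiry = expiry

    def check_auth_tickets(self, now: datetime) -> int:
        # Renew before expiry rather than at it.
        if now >= self.expiry - RENEW_MARGIN:
            print("getting new tickets!")
            return self._send_mon_message()
        return 0

    def _send_mon_message(self) -> int:
        print("_send_mon_message to mon.compute-0")
        # ... network round-trip to the monitor elided ...
        self.expiry += timedelta(hours=1)  # assumed ticket TTL
        print("_finish_auth 0")            # 0 == success
        return 0

c = TicketClient(datetime(2025, 11, 25, 21, 0, tzinfo=timezone.utc))
c.check_auth_tickets(datetime(2025, 11, 25, 20, 57, tzinfo=timezone.utc))
```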
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:31.585869+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:32.584786+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:33.585041+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:34.585208+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:35.585369+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:36.585536+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: mgrc ms_handle_reset ms_handle_reset con 0x559058ed3400
Nov 25 20:57:23 compute-0 ceph-osd[90092]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/446496168
Nov 25 20:57:23 compute-0 ceph-osd[90092]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/446496168,v1:192.168.122.100:6801/446496168]
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: get_auth_request con 0x55905acf9c00 auth_method 0
Nov 25 20:57:23 compute-0 ceph-osd[90092]: mgrc handle_mgr_configure stats_period=5
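The other non-periodic event in this stretch: the mgr connection is reset, the OSD terminates the old session, redials the mgr's advertised v2/v1 address pair, re-authenticates, and receives stats_period=5 (report statistics every 5 seconds). A hypothetical sketch of that retry-then-configure flow; the function shapes are invented for illustration, and the real messenger is C++:

```python
# Hypothetical reconnect flow mirroring the mgrc lines above.
ADDRS = ["v2:192.168.122.100:6800", "v1:192.168.122.100:6801"]

def connect(addr: str) -> bool:
    # TCP/auth handshake elided; assume the v2 endpoint answers.
    return addr.startswith("v2:")

def reconnect(addrs: list[str]) -> dict:
    print("ms_handle_reset -> terminating old session")
    for addr in addrs:
        print(f"starting new session with {addr}")
        if connect(addr):
            return {"stats_period": 5}  # config pushed back by the mgr
    raise ConnectionError("no usable mgr address")

print(reconnect(ADDRS))  # {'stats_period': 5}
```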
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:37.585748+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:38.585972+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:39.586138+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:40.586266+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:41.586408+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:42.586601+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:43.586761+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:44.586965+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:45.587186+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:46.587364+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:47.587539+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:48.587709+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:49.587873+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:50.588067+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:51.588281+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:52.588401+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:53.588563+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:54.588724+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:55.588864+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:56.589059+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:57.589221+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:58.589378+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:59.589511+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:00.589692+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:01.768126+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:02.768302+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:03.768471+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:04.768626+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:05.768785+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:06.769031+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:07.769209+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:08.769355+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:09.769507+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:10.769672+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:11.769887+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:12.770057+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:13.770230+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:14.770447+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:15.770639+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:16.770966+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:17.771166+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:18.771398+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:19.771607+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:20.771896+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:21.772065+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:22.772269+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:23.772511+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:24.772756+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:25.772900+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:26.773121+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:27.773350+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:28.773512+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:29.773662+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:30.773863+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:31.774061+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:32.774252+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:33.774424+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:34.774571+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:35.774882+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:36.775068+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:37.775279+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:38.775459+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:39.775601+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:40.775770+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:41.775979+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:42.776164+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:43.776325+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:44.776500+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:45.776634+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:46.776887+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:47.777045+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:48.777212+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:49.777364+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:50.777601+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:51.777765+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:52.777935+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:53.778104+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:54.778338+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:55.778496+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:56.778746+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:57.778952+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:58.779127+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:59.779304+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:00.779572+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:01.779748+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:02.779934+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:03.780131+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:04.780332+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:05.780543+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:06.780758+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:07.780930+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:08.781108+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:09.781263+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:10.781429+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:11.781583+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:12.781727+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:13.781918+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:14.782152+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:15.782327+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:16.782511+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:17.782694+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:18.782888+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:19.783044+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:20.784026+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:21.784193+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:22.784339+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:23.784503+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:24.784683+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:25.784916+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:26.785154+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:27.785342+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:28.785506+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:29.785895+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:30.786088+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:31.786317+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:32.786468+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:33.786620+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:34.786786+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:35.787009+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:36.787211+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:37.787389+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:38.787560+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:39.787713+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:40.787882+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:41.788095+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:42.788309+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:43.788495+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:44.788681+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:45.788861+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:46.789091+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:47.789295+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:48.789429+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:49.789572+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:50.789684+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:51.789933+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:52.790128+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:53.790305+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:54.790507+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:55.790678+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:56.791020+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:57.791468+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:58.791763+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:59.792208+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:00.792570+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:01.792961+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:02.794195+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:03.794469+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:04.794652+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:05.794864+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:06.795104+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:07.795344+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:08.795575+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:09.795853+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:10.796079+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:11.796267+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:12.796520+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:13.796719+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:14.796889+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:15.797091+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:16.797385+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:17.797643+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:18.798017+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:19.798321+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:20.798637+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:21.798969+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:22.799200+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:23.799478+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:24.799844+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:25.800078+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:26.800349+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:27.800577+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:28.800761+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:29.800884+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:30.802926+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:31.803915+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:32.804732+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:33.806056+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:34.806547+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:35.806838+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:36.807266+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:37.807882+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:38.808388+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:39.808712+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:40.808891+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:41.809584+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:42.809868+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:43.810309+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:44.810737+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:45.810967+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:46.811271+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:47.811480+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:48.811893+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:49.812194+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:50.812441+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:51.812745+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:52.812914+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:53.813133+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:54.813300+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:55.813538+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:56.813742+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:57.813945+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:58.814183+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:59.814414+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:00.814626+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:01.814840+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:02.814960+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:03.815165+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:04.815268+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:05.815383+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:06.815598+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:07.815839+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:08.815974+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:09.816120+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:10.816275+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:11.816443+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:12.816612+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:13.816763+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:14.816899+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:15.817077+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:16.817269+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:17.817446+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:18.817687+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:19.817875+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:20.818041+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:21.818202+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:22.818370+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:23.818515+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:24.818672+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:25.818877+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:26.819114+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:27.819275+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:28.819469+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:29.819626+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:30.819777+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:31.819964+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:32.820089+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:33.820261+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:34.820412+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:35.820566+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:36.820771+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:37.820910+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:38.821057+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:39.821230+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:40.821394+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:41.821576+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:42.821789+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:43.822019+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:44.822203+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:45.822420+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:46.822665+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:47.822861+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:48.823031+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:49.823217+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:50.823438+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:51.823592+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:52.823749+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:53.823890+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:54.824116+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:55.824268+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 25 20:57:23 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239104025' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:56.824467+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:57.824635+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:58.824887+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:59.825063+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:00.825231+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:01.825407+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:02.825578+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:03.825788+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:04.826005+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:05.826164+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:06.826393+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:07.826548+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:08.826697+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:09.826842+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:10.827020+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:11.827190+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:12.827408+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:13.827594+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:14.827749+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:15.827917+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:16.828115+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:17.828292+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:18.828510+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:19.828739+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:20.828925+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:21.829101+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:22.829309+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:23.829485+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:24.829717+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:25.829877+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:26.830237+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:27.830415+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:28.830589+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:29.830698+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:30.830864+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 4381 writes, 20K keys, 4381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4381 writes, 395 syncs, 11.09 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:31.831043+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:32.831243+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:33.831362+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:34.831512+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:35.831671+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:36.831882+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:37.832107+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:38.832313+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:39.832521+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:40.832756+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:41.833085+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:42.833538+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:43.833784+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:44.833971+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:45.834114+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:46.834279+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:47.834409+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:48.834591+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:23 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:23 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x5590573d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404985 data_alloc: 218103808 data_used: 36864
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:49.834776+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61440000 unmapped: 368640 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}'
Nov 25 20:57:23 compute-0 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 20:57:23 compute-0 ceph-osd[90092]: osd.1 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:50.835026+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: do_command 'config show' '{prefix=config show}'
Nov 25 20:57:23 compute-0 ceph-osd[90092]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 20:57:23 compute-0 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 20:57:23 compute-0 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 20:57:23 compute-0 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 20:57:23 compute-0 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 2015232 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:51.835280+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 2039808 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: tick
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Nov 25 20:57:23 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:52.835541+0000)
Nov 25 20:57:23 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 1916928 heap: 63905792 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:23 compute-0 ceph-osd[90092]: do_command 'log dump' '{prefix=log dump}'
Nov 25 20:57:23 compute-0 podman[284869]: 2025-11-25 20:57:23.96978764 +0000 UTC m=+0.066497575 container health_status df7f01692742aff180dcac9e928a546d1788595f5a5934b36bfa63670848132b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:57:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 20:57:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400922806' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 20:57:24 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3410680739' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 20:57:24 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/359322773' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 20:57:24 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/450518948' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 20:57:24 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1682775689' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 20:57:24 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/239104025' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 20:57:24 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2400922806' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 20:57:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 25 20:57:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2327410073' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 20:57:24 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1573: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:24 compute-0 rsyslogd[1006]: imjournal from <np0005535736:ceph-osd>: begin to drop messages due to rate-limiting
Nov 25 20:57:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 25 20:57:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/275420128' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 20:57:24 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 20:57:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 25 20:57:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056804668' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 20:57:24 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 25 20:57:24 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3017305912' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 20:57:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1439468682' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2327410073' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: pgmap v1573: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:25 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/275420128' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2056804668' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3017305912' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1439468682' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 25 20:57:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1360306586' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 25 20:57:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661175214' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 25 20:57:25 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2509062030' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 20:57:25 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:57:25 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14678 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:25 compute-0 podman[285148]: 2025-11-25 20:57:25.971389392 +0000 UTC m=+0.069522727 container health_status 06c1451d9c1baa88b384dac634b09792e838458f82c4d732b703066a8897977d (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:57:25 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14680 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:26 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1360306586' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 20:57:26 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2661175214' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 20:57:26 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2509062030' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 20:57:26 compute-0 ceph-mon[75144]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:26 compute-0 ceph-mon[75144]: from='client.14680 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1574: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14682 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14684 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 20:57:26 compute-0 ceph-mgr[75443]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 20:57:27 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14690 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:27 compute-0 ceph-mon[75144]: pgmap v1574: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:27 compute-0 ceph-mon[75144]: from='client.14682 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:27 compute-0 ceph-mon[75144]: from='client.14684 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:27 compute-0 ceph-mon[75144]: from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 25 20:57:27 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831257604' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 20:57:27 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14694 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:27 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 25 20:57:27 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2475706050' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 20:57:27 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14698 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 20:57:28 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3135129422' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-mon[75144]: from='client.14690 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1831257604' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-mon[75144]: from='client.14694 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2475706050' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-mon[75144]: from='client.14698 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3135129422' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14702 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1575: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:06.293495+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:07.293656+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:08.293774+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:09.294021+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:10.294160+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:11.294318+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:12.295050+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:13.295237+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:14.295374+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:15.295547+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:16.295720+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:17.295872+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:18.296365+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:19.296630+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:20.296856+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:21.297091+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:22.297229+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:23.297388+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:24.297517+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:25.297652+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:26.297815+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:27.297950+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:28.298079+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:29.298252+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:30.298477+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:31.298661+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:32.298782+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:33.298981+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:34.299124+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:35.299340+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:36.299476+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:37.299608+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:38.299866+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:39.300036+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:40.300209+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:41.300434+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:42.300605+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:43.300745+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:44.300877+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:45.301045+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:46.301215+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:47.301365+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:48.301533+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:49.301658+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:50.301845+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:51.302021+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:52.302210+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:53.302360+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:54.302499+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:55.302624+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:56.302846+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:57.302987+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:58.303162+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:23:59.303266+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:00.374286+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:01.374455+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:02.374889+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:03.375098+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:04.375253+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
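The paired commit_cache_size lines repeat the same two high-priority pool ratios on every tuning cycle, and both are exact simple fractions, which suggests they come from small integer shares rather than measured load. That derivation is an assumption; the snippet below only recovers the fractions from the printed decimals:

```python
from fractions import Fraction

# Recover the simple fractions behind the two logged ratios.
for printed in (0.285714, 0.0555556):
    print(printed, "≈", Fraction(printed).limit_denominator(100))
# 0.285714 ≈ 2/7, 0.0555556 ≈ 1/18
```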
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:05.375406+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:06.375556+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:07.375720+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:08.375888+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:09.376036+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:10.376190+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:11.376395+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:12.376585+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:13.376724+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:14.376888+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:15.377099+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:16.377259+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:17.377399+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:18.394915+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:19.395087+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:20.398006+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 204800 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:21.398157+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
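Here the tune_memory counters step for the first time in this stretch: mapped rises by exactly 8192 bytes and unmapped falls by the same amount, i.e. the allocator returned one 8 KiB span to active use while the heap total stayed fixed (the same step repeats once more further down). The bookkeeping, checked directly against the logged values:

```python
# The mapped/unmapped change at this point is a single 8 KiB span moving
# back into use; the heap total is invariant. Plain arithmetic on the log:
before = {"mapped": 60637184, "unmapped": 204800, "heap": 60841984}
after  = {"mapped": 60645376, "unmapped": 196608, "heap": 60841984}
print({k: after[k] - before[k] for k in before})  # mapped +8192, unmapped -8192, heap 0
assert before["mapped"] + before["unmapped"] == before["heap"]
assert after["mapped"]  + after["unmapped"]  == after["heap"]
```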
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:22.398318+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:23.439618+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:24.439786+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:25.439885+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:26.440411+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:27.440710+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:28.440912+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:29.441059+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:30.441201+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:31.441344+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:32.441470+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:33.441613+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:34.441746+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:35.441915+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:36.442082+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:37.442365+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:38.442540+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:39.442687+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:40.442981+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:41.443925+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:42.444123+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:43.444228+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:44.444506+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:45.444660+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:46.444864+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:47.445007+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:48.445227+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:49.445434+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:50.445584+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:51.445861+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:52.446040+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:53.446206+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:54.446443+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:55.446619+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:56.446829+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:57.447075+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:58.447421+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:24:59.447592+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:00.447958+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:01.448405+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:02.448856+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:03.448966+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:04.449136+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:05.449271+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:06.449422+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:07.449863+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:08.450096+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:09.450245+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:10.450398+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:11.450612+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:12.450754+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:13.451011+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:14.451229+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:15.451429+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:16.451659+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:17.451861+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:18.452072+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:19.452231+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:20.452400+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:21.452583+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:22.452724+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:23.452851+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:24.452972+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:25.453121+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:26.453311+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:27.453558+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:28.453715+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:29.453885+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:30.453991+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:31.454134+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:32.454261+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:33.454415+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:34.454546+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:35.454735+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:36.454887+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:37.455050+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:38.455162+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:39.455322+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:40.455424+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:41.455624+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:42.455764+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:43.455934+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:44.456094+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:45.456249+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:46.456405+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:47.456548+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:48.456739+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:49.456901+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:50.457067+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:51.457263+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:52.457396+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:53.457592+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:54.457724+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:55.457863+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:56.458034+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:57.458185+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:58.458380+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:25:59.458532+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:00.458713+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:01.458952+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:02.459090+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:03.459257+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:04.459422+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:05.459568+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:06.459738+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:07.459891+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:08.460026+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:09.460203+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:10.460351+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:11.460568+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 188416 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:12.460749+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:13.460904+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:14.461082+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:15.461208+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:16.461344+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:17.461645+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:18.461856+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:19.462035+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:20.462204+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:21.462428+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:22.462583+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:23.462724+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:24.462859+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:25.463001+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 196608 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d35483090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d354831f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:26.463151+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:27.463305+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:28.463439+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:29.463547+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:30.463680+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:31.463881+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:32.464008+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:33.464158+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:34.464319+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:35.464472+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:36.464691+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:37.465131+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:38.465534+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:39.465712+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:40.465942+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:41.466091+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:42.466395+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:43.466525+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:44.466881+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:45.467131+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:46.467421+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:47.467652+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:48.467850+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:49.468036+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:50.468205+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:51.468403+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:52.468631+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:53.468844+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:54.469072+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:55.469237+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:56.469425+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:57.469588+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:58.469720+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:26:59.469853+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:00.965893+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:01.966151+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:02.966437+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:03.966575+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:04.966739+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:05.966930+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:06.967105+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:07.967271+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:08.967468+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:09.967689+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:10.967915+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:11.968160+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:12.968335+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:13.968604+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:14.968770+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:15.968916+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:16.969221+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:17.969385+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:18.969600+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:19.969754+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:20.969932+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:21.970221+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:22.970367+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:23.970539+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:24.970719+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:25.970924+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 155648 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:26.971046+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:27.971227+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:28.971428+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:29.971699+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:30.971917+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:31.972105+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:32.972291+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:33.972550+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:34.972729+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:35.972863+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:36.973030+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:37.973244+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:38.973406+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:39.973621+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:40.973918+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:41.974200+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:42.974420+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:43.974567+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:44.974739+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:45.974864+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:46.974987+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:47.975142+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:48.975374+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:49.975572+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:50.975740+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:51.975948+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:52.976109+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:53.976272+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:54.976426+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:55.976594+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:56.976770+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:57.976963+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:58.977181+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:27:59.977372+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:00.977570+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:01.977779+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:02.977968+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:03.978155+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:04.978419+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:05.978699+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:06.978845+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:07.979024+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:08.979189+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:09.979326+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:10.979536+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:11.979867+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:12.980151+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:13.980337+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:14.980526+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:15.980745+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:16.980956+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:17.981125+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:18.981332+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:19.981663+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:20.981898+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:21.982178+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:22.982360+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:23.982532+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:24.982768+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:25.982968+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:26.983124+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:27.983244+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:28.983420+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:29.983571+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:30.983769+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:31.983994+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:32.984193+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:33.984370+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:34.984605+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:35.984868+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:36.985038+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:37.985979+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:38.986268+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:39.986514+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:40.986737+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:41.987000+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:42.987202+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:43.987487+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:44.987698+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:45.988118+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:46.988352+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:47.988503+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:48.988857+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:49.989089+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:50.989335+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:51.989716+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:52.989946+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:53.990187+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:54.990351+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:55.990599+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:56.990873+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:57.991233+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:58.991389+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:28:59.991607+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:00.991784+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:01.992014+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:02.992252+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:03.992376+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:04.992543+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:05.992727+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:06.992932+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:07.993105+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:08.993280+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:09.993641+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:10.993906+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:11.994153+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:12.994389+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:13.994571+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:14.994847+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:15.995003+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:16.995223+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:17.995391+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:18.995557+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:19.995831+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:20.996071+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:21.996313+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:22.996488+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:23.996676+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:24.996864+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:25.997099+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:26.997293+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:27.997463+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:28.997635+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:29.997891+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:30.998088+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:31.998306+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:32.998460+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:33.998733+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:34.998996+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:35.999283+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:36.999828+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:38.000176+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:39.000497+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:40.000724+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:41.001008+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:42.001325+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:43.001594+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:44.001902+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:45.002159+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:46.002413+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:47.002659+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:48.002934+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:49.003136+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:50.003419+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:51.003688+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:52.004019+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:53.004274+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:54.004536+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:55.004753+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:56.004966+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:57.005142+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:58.005306+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:29:59.005536+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:00.005703+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:01.170949+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:02.171236+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:03.171704+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:04.171924+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:05.172126+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:06.172358+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:07.172616+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:08.172828+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:09.173020+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:10.173204+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:11.173329+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:12.173582+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:13.173760+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:14.173957+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:15.174123+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:16.174325+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:17.174506+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:18.174676+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:19.174890+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:20.174999+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:21.175151+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:22.175372+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:23.175553+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:24.175748+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:25.175939+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:26.176093+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:27.176221+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:28.176368+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:29.176540+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:30.176720+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:31.176942+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:32.177199+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:33.177373+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:34.177567+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:35.177760+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:36.177922+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:37.178141+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:38.178320+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:39.178554+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:40.178707+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:41.178910+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:42.179094+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:43.179258+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:44.179437+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:45.179723+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:46.179939+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:47.180142+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:48.180325+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:49.180481+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:50.180627+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:51.180915+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:52.181860+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:53.183534+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:54.185098+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:55.188041+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:56.190687+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:57.193038+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:58.193886+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:30:59.194467+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:00.195588+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:01.195759+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:02.196334+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:03.196748+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:04.197057+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:05.197242+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:06.197441+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:07.197744+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:08.197864+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:09.198059+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:10.198202+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:11.198356+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:12.198666+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:13.199018+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:14.199217+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:15.199424+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:16.199567+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:17.199773+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:18.199882+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:19.200020+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:20.200252+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:21.200472+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:22.200697+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:23.200904+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:24.201080+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:25.201232+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:26.201401+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:27.201555+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:28.201748+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:29.201918+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:30.202131+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:31.202280+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:32.202486+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:33.202672+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:34.203061+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:35.203252+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:36.203409+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:37.203558+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:38.203761+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:39.204010+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:40.204232+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:41.204431+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:42.204691+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:43.204856+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:44.205078+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:45.205300+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:46.205455+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:47.205594+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 25 20:57:28 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238814655' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:48.205749+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:49.205908+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:50.206078+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:51.206255+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:52.206475+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:53.206610+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:54.206791+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:55.206996+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:56.207151+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:57.209308+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:58.209608+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:31:59.210029+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:00.210240+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:01.210388+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:02.210618+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:03.213584+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:04.213872+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:05.216328+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:06.216601+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:07.218390+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:08.218606+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:09.222971+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:10.223236+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:11.224291+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:12.224599+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:13.225184+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:14.225472+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:15.226109+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:16.226323+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:17.226561+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:18.226784+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:19.227293+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:20.227661+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:21.227873+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:22.228062+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:23.228207+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:24.228398+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:25.228646+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:26.228763+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:27.228914+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:28.229094+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:29.229269+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:30.229467+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:31.229689+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:32.229918+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:33.230044+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:34.230208+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:35.230389+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:36.230560+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:37.230733+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:38.230951+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:39.231103+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:40.231284+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:41.231457+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:42.231679+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:43.231896+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:44.232032+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:45.232151+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:46.232339+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:47.232519+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:48.232735+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:49.232914+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:50.233092+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:51.233279+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:52.233508+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:53.233647+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:54.233877+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:55.234036+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:56.234206+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:57.234421+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:58.234627+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:32:59.235006+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:00.235252+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:01.235456+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:02.235702+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:03.235947+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:04.236255+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:05.236759+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:06.236998+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:07.237362+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:08.237547+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:09.237864+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:10.238025+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:11.238345+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:12.238617+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:13.238872+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:14.239023+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:15.239227+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:16.239438+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:17.239656+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:18.239888+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:19.240092+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:20.240253+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:21.240448+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:22.240720+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:23.240907+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:24.241078+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:25.241345+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:26.241524+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:27.241763+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:28.241988+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:29.242192+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:30.242402+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:31.242604+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:32.242788+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:33.243549+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:34.243721+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:35.243893+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:36.244060+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:37.244435+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:38.244602+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:39.244907+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:40.245092+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:41.245282+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:42.245498+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:43.245677+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:44.245874+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:45.246056+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:46.246190+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:47.246427+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:48.246658+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:49.246918+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:50.247112+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:51.247281+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:52.247510+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:53.247674+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:54.248115+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:55.248291+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:56.248480+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:57.248640+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:58.248847+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:33:59.249023+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:00.249196+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:01.249363+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:02.249571+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:03.249642+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:04.249791+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:05.250012+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:06.251590+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:07.253721+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:08.254533+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:09.254695+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:10.255255+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:11.255404+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:12.255546+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:13.256364+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:14.257029+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:15.257512+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:16.257953+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:17.258326+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:18.258496+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:19.258753+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:20.259021+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:21.259225+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:22.259514+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:23.259697+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:24.260034+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:25.260220+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:26.260510+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:27.260872+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:28.261041+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:29.261228+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:30.261504+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:31.261909+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:32.262190+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:33.262333+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:34.262569+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:35.262884+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:36.263191+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:37.263499+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:38.263704+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:39.263869+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:40.264001+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:41.264168+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:42.264474+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:43.264630+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:44.264778+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:45.264973+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:46.265126+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:47.265336+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:48.265539+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:49.265734+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:50.265960+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:51.266132+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:52.266430+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:53.266565+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:54.266760+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:55.267013+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:56.267171+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:57.267385+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:58.267620+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:34:59.267885+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:00.268151+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:01.268422+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:02.268717+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:03.268902+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:04.269154+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:05.269376+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:06.269644+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:07.269903+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:08.270135+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:09.270332+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:10.270552+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:11.270710+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:12.271357+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:13.271650+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:14.272276+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:15.273009+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:16.273525+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:17.273996+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:18.274238+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:19.274497+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:20.274949+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:21.275133+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:22.275507+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:23.275762+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:24.276076+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:25.276286+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:26.276456+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:27.276599+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:28.276788+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:29.277020+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:30.277174+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:31.277343+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:32.277554+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:33.277728+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:34.277875+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:35.278186+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:36.278457+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:37.278700+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:38.278954+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:39.279259+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:40.279583+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:41.279891+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:42.280181+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:43.280316+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:44.280438+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:45.280587+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:46.280660+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:47.280741+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:48.280881+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:49.281044+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:50.281198+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:51.281369+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:52.281561+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:53.282371+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:54.282525+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:55.282730+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:56.282927+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:57.283062+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:58.283230+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:35:59.283386+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:00.283526+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:01.283670+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:02.283857+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:03.284096+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:04.284252+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:05.284433+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:06.284598+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:07.284739+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:08.284925+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:09.285109+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:10.285292+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:11.285450+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:12.285651+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:13.285909+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:14.286065+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:15.286305+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:16.287630+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:17.287865+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:18.288577+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:19.289791+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:20.290489+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:21.290774+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:22.291573+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:23.292005+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:24.292485+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:25.292669+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
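
Note: this is RocksDB's periodic stats dump (Uptime 1800.1 s with a 600.0 s interval, so the third dump since the OSD started). Every "Interval" row is zero, meaning no writes reached RocksDB in the last ten minutes, while the cumulative rows still show 4230 WAL writes over 387 syncs. A sketch that pulls those figures back out, assuming the continuation-line layout journald rendered above:

    import re

    # The "** DB Stats **" block as rendered above; journald prints the
    # continuation lines without a syslog prefix.
    DUMP = """\
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.01 MB/s
    Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    """

    writes, syncs = map(int, re.search(
        r"Cumulative WAL: (\d+) writes, (\d+) syncs", DUMP).groups())
    print(f"WAL writes: {writes}, syncs: {syncs}, "
          f"writes/sync: {writes / syncs:.2f}")   # 10.93, matching the dump
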
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:26.292910+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:27.293378+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:28.293772+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:29.294174+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:30.294475+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:31.294733+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: mgrc ms_handle_reset ms_handle_reset con 0x558d36cec800
Nov 25 20:57:28 compute-0 ceph-osd[89084]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/446496168
Nov 25 20:57:28 compute-0 ceph-osd[89084]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/446496168,v1:192.168.122.100:6801/446496168]
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: get_auth_request con 0x558d38dfe800 auth_method 0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: mgrc handle_mgr_configure stats_period=5
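
Note: the five mgrc lines above are the one non-periodic event in this stretch: the OSD's mgr client sees its connection reset, terminates the session, reconnects to the active mgr at 192.168.122.100:6800, re-authenticates, and is handed stats_period=5. Events like this are easy to lose among the per-second noise; a small filter sketch (the script name and pattern list are illustrative, taken from this excerpt only):

    import sys

    # Message fragments that recur every second in this log; anything that
    # matches none of them (e.g. the mgrc reset above) is worth a look.
    PERIODIC = (
        "monclient: tick",
        "monclient: _check_auth_tickets",
        "monclient: _check_auth_rotating",
        "prioritycache tune_memory",
        "osd.0 39 heartbeat",
        "rocksdb: commit_cache_size",
        "_resize_shards",
    )

    # Usage: python3 filter_noise.py < osd.log
    for line in sys.stdin:
        if not any(pat in line for pat in PERIODIC):
            print(line.rstrip())
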
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:32.294989+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 90112 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:33.295199+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 90112 heap: 60841984 old mem: 2845415832 new mem: 2845415832
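[annotation] The _resize_shards / tune_memory pair above is BlueStore's priority-cache tuner doing its periodic pass: tune_memory compares the process heap (mapped/unmapped, from the allocator) against the memory target of 4294967296 bytes (4 GiB), and _resize_shards shows how the resulting cache budget is split across the kv, onode, meta and data shards. With only about 58 MiB mapped against a 4 GiB target the tuner has no pressure to react to, so "new mem" stays equal to "old mem" (about 2.65 GiB) and the pair repeats unchanged for the rest of the capture. A parsing sketch:

    import re

    # Pull the tuner's numbers out of a "prioritycache tune_memory" line
    # and report how much of the memory target is actually mapped.
    FIELDS = re.compile(r"(target|unmapped|mapped|heap|old mem|new mem): (\d+)")

    def tuner_state(line: str) -> dict:
        d = {k: int(v) for k, v in FIELDS.findall(line)}
        d["mapped_pct_of_target"] = round(100 * d["mapped"] / d["target"], 2)
        return d

    print(tuner_state(
        "prioritycache tune_memory target: 4294967296 mapped: 60702720 "
        "unmapped: 139264 heap: 60841984 old mem: 2845415832 "
        "new mem: 2845415832"))
    # mapped is ~1.41% of target, so new mem == old mem on every pass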
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 ms_handle_reset con 0x558d36ced800 session 0x558d3625d4a0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: handle_auth_request added challenge on 0x558d36ced000
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 ms_handle_reset con 0x558d370ca000 session 0x558d377c8d20
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: handle_auth_request added challenge on 0x558d36ced800
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 ms_handle_reset con 0x558d36e96c00 session 0x558d3625dc20
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: handle_auth_request added challenge on 0x558d38e0e400
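[annotation] Interleaved with the mgr bounce, osd.0 drops three peer connections (the ms_handle_reset lines, each naming a connection and session pointer) while the monclient answers incoming auth requests with fresh challenges, presumably the same peers dialling back in; the log identifies them only by pointer, so which of osd.1/osd.2's links these are is not recoverable here. A sketch for lining the two message types up when auditing such a burst:

    import re

    # Collect reset con/session pairs and challenge con pointers so a burst
    # of resets can be checked against the re-authentications that follow.
    RESET = re.compile(
        r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)")
    CHALLENGE = re.compile(
        r"handle_auth_request added challenge on (0x[0-9a-f]+)")

    def audit(lines):
        resets, challenges = [], []
        for line in lines:
            if (m := RESET.search(line)):
                resets.append(m.groups())
            elif (m := CHALLENGE.search(line)):
                challenges.append(m.group(1))
        return resets, challenges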
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:34.295400+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:35.295751+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:36.296026+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:37.296254+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:38.296434+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:39.296647+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:40.296861+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:41.297028+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:42.297273+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:43.297495+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
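[annotation] The commit_cache_size pair repeats verbatim on every cache pass, and the printed ratios are suspiciously round: 0.285714 is 2/7 and 0.0555556 is 1/18 at the logged precision. The inputs to the high-priority-pool calculation are therefore static, and the two lines are simply re-logged each time the RocksDB block-cache budget is rebalanced; which two caches they refer to is not named in the log. A quick check of the exact fractions:

    from fractions import Fraction

    # Recover the likely exact values behind the repeated ratios.
    for r in (0.285714, 0.0555556):
        print(r, Fraction(r).limit_denominator(100))
    # 0.285714 -> 2/7, 0.0555556 -> 1/18: stable inputs, hence the
    # identical pair of lines on every resize pass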
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:44.297859+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:45.298084+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:46.298311+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:47.298543+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:48.298701+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:49.298914+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:50.299101+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:51.299293+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:52.299500+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:53.299715+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:54.299908+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:55.300109+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:56.300316+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:57.300544+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:58.300733+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:36:59.305565+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:00.305718+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:01.305884+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:02.306056+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:03.306200+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:04.306357+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:05.306504+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:06.306689+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:07.306880+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:08.307095+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:09.307389+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:10.307640+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:11.307928+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:12.308216+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:13.308591+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:14.308939+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:15.309177+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:16.309454+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:17.309635+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:18.309832+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:19.310034+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:20.310231+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:21.310432+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:22.310662+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:23.310950+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:24.311181+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:25.311460+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:26.311723+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:27.311972+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:28.312110+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:29.312245+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:30.312465+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:31.312674+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:32.312872+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:33.313112+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:34.313387+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:35.313682+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:36.313965+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:37.314259+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:38.314482+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:39.314677+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:40.314856+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:41.315068+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:42.315258+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:43.315422+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:44.315580+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:45.315875+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:46.316038+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:47.316233+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:48.316465+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:49.316692+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:50.316921+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:51.317109+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:52.317308+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:53.317461+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:54.317727+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:55.317957+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:56.318247+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:57.318412+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:58.318631+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:37:59.318864+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:00.319022+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:01.319171+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:02.319374+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:03.319551+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:04.319751+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:05.319962+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:06.320185+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:07.320353+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:08.320541+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:09.320776+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:10.321249+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:11.321430+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:12.321620+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:13.321881+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:14.322086+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:15.322231+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:16.322376+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:17.322536+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:18.322754+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:19.322917+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:20.323052+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:21.323236+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:22.323436+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:23.323619+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:24.323792+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:25.323960+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 81920 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:26.324122+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:27.324293+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:28.324476+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:29.324663+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:30.324911+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:31.325065+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:32.325262+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:33.325480+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:34.325697+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:35.325878+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:36.325974+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:37.326150+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:38.326300+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:39.326464+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:40.326643+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:41.326890+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:42.327073+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:43.327232+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:44.327584+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:45.327862+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:46.328048+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:47.328199+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:48.328366+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:49.328549+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:50.328724+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:51.328870+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:52.329061+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:53.329214+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:54.329414+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:55.329581+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:56.329657+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:57.329743+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:58.329873+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:38:59.329993+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:00.330166+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:01.330375+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:02.330608+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:03.330890+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:04.331129+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:05.331294+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:06.331482+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:07.331679+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:08.331884+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:09.332062+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:10.332164+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:11.332302+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:12.332507+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:13.332656+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:14.332851+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:15.333074+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:16.333273+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:17.333452+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:18.333621+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:19.333856+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:20.334049+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:21.334232+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:22.334440+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:23.335159+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:24.335262+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:25.335415+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:26.335633+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:27.335791+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:28.336404+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:29.337540+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:30.337866+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:31.338062+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:32.338279+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:33.338619+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:34.338976+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:35.339194+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:36.340016+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:37.342446+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:38.343081+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:39.344529+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:40.344756+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:41.346963+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:42.347449+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:43.348501+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:44.348914+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:45.349360+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:46.349943+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:47.350122+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:48.350508+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:49.350828+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:50.351073+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:51.351396+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:52.351657+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:53.351959+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:54.352286+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:55.352654+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:56.357417+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:57.357592+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:58.357760+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:39:59.357933+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:00.358121+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:01.358227+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:02.358381+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:03.358569+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:04.358955+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:05.359159+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:06.359340+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:07.359515+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:08.359686+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:09.359869+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:10.360028+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:11.360202+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:12.360398+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:13.360540+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:14.360756+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:15.360926+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:16.361084+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:17.361234+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:18.361376+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:19.361565+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:20.361697+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:21.361853+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:22.362004+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:23.362141+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:24.362321+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:25.362538+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:26.362711+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:27.362882+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:28.363027+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:29.363171+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:30.363353+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:31.363540+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:32.363750+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:33.363899+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:34.364102+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:35.364236+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:36.364420+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:37.364603+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:38.367018+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:39.367208+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:40.367424+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:41.367673+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:42.367939+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:43.368144+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:44.368301+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:45.368459+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:46.368652+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:47.368857+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:48.369046+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:49.369213+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:50.369391+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:51.369612+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:52.369898+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:53.370088+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:54.370403+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:55.370576+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:56.370769+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:57.370984+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:58.371181+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:40:59.371369+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:00.371561+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:01.371775+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:02.371998+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:03.372133+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:04.372265+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:05.372436+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:06.372612+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:07.372861+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:08.373214+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:09.373370+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:10.373525+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:11.373722+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:12.373917+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:13.374085+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:14.374257+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:15.374437+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:16.374624+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:17.374772+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:18.375017+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:19.375337+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:20.375525+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:21.375703+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:22.375875+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:23.376021+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:24.376213+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:25.376430+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:26.376592+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 114688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:27.376836+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:28.377002+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:29.377183+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:30.377394+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:31.377605+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:32.377789+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:33.377989+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:34.378194+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:35.378411+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:36.378579+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:37.378875+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:38.379078+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:39.379212+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:40.379437+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:41.379559+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:42.379737+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:43.379932+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:44.380081+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:45.380273+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:46.387671+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:47.387912+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:48.388195+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:49.388504+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:50.388727+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:51.389019+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:52.389360+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:53.389618+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:54.389867+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:55.390120+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:56.390367+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:57.390546+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:58.390852+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:41:59.391131+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:00.391614+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:01.391919+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:02.392184+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:03.392387+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:04.392610+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:05.392861+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:06.393103+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:07.393264+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:08.393452+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:09.393654+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:10.393885+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:11.394114+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:12.394368+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:13.394525+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:14.394751+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:15.394987+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:16.395224+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:17.395441+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:18.395620+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:19.395827+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:20.396727+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:21.396919+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:22.397146+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:23.397343+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:24.397579+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:25.397856+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:26.398049+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:27.398245+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:28.398438+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:29.398700+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:30.398874+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:31.399029+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:32.399251+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:33.399387+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:34.399618+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:35.399853+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:36.400020+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:37.408828+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:38.409036+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:39.409203+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:40.409437+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:41.409627+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:42.409878+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:43.410076+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:44.410263+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:45.410537+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:46.410857+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:47.411063+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:48.411221+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:49.411473+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:50.411784+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:51.412077+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:52.414849+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:53.420304+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:54.420470+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:55.420688+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:56.420909+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:57.421065+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:58.421243+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:42:59.421449+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:00.421615+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:01.421759+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:02.422009+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:03.422189+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:04.422399+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:05.422567+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:06.422695+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:07.422883+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:08.423080+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:09.423240+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:10.423387+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 106496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:11.423523+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
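
Annotation: starting with the line above, mapped drops from 60735488 to 60719104 and unmapped rises from 106496 to 122880, i.e. the allocator moved exactly 16 KiB (four 4 KiB pages) to its unmapped free list while the heap total stayed at 60841984. The counters keep the invariant mapped + unmapped = heap on both sides of the change:

    # Counters copied from the lines before and after the change; the
    # invariant mapped + unmapped == heap holds for both snapshots.
    before = dict(mapped=60735488, unmapped=106496, heap=60841984)
    after  = dict(mapped=60719104, unmapped=122880, heap=60841984)
    for s in (before, after):
        assert s["mapped"] + s["unmapped"] == s["heap"]
    print(before["mapped"] - after["mapped"])  # -> 16384, four 4 KiB pages
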
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:12.423699+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:13.423863+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:14.424019+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:15.424205+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:16.424411+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:17.424599+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:18.424767+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:19.424921+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:20.425123+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:21.425283+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:22.425478+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:23.425633+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:24.425892+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:25.426072+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:26.426236+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:27.426428+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:28.426618+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:29.426933+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:30.427111+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:31.427300+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:32.427465+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:33.427654+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:34.427894+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:35.428046+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:36.428195+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:37.428439+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:38.428638+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:39.429138+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:40.429325+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:41.429489+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:42.429643+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:43.429863+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:44.430078+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:45.430202+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:46.430354+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:47.430539+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:48.430747+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:49.430935+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:50.431097+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:51.431261+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:52.431453+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:53.431641+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:54.431878+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:55.432062+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:56.432258+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:57.432418+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:58.432598+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:43:59.432758+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:00.432939+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:01.433060+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:02.433251+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:03.433390+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:04.433550+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:05.433708+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:06.433902+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:07.434081+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:08.434189+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 122880 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:44:09.434395+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:43.457849+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:44.458280+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:45.458584+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:46.458770+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:47.458970+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:48.459214+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:49.459504+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:50.460221+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:51.460516+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:52.460895+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:53.461222+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:54.461407+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:55.462920+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:56.467103+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:57.471308+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:58.471611+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:45:59.474876+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:00.477633+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:01.479393+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:02.480172+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:03.482126+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:04.483948+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:05.485321+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:06.485959+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:07.487133+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:08.487457+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:09.487743+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:10.488114+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:11.488382+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:12.489280+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:13.490038+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:14.490217+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:15.490381+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:16.490540+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:17.490732+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:18.490865+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:19.491109+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:20.491216+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:21.491365+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:22.491589+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:23.491948+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:24.492134+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:25.492302+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:26.492491+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:27.492657+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:28.492862+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:29.493031+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:30.493260+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:31.493427+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:32.493663+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:33.493854+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:34.494043+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:35.494189+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:36.494363+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:37.494550+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:38.494739+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:39.494931+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:40.495059+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:41.495261+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:42.495476+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:43.495630+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 147456 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:44.495855+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:45.495996+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:46.496135+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:47.496278+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:48.496433+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:49.496547+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:50.496699+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:51.496879+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:52.497105+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:53.497339+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:54.497508+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:55.497686+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:56.497865+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:57.498042+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:58.498213+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:46:59.498370+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:00.498517+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:01.498691+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:02.499605+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:03.499762+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:04.499873+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:05.500029+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:06.500175+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:07.500318+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:08.500484+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:09.500707+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:10.500930+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:11.501053+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:12.501286+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:13.501497+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:14.501680+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:15.501872+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:16.502080+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:17.502234+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:18.502394+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:19.502608+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:20.502842+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:21.503051+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:22.503341+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:23.503510+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:24.503657+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:25.504233+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:26.504389+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:27.504547+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:28.504724+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:29.504894+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:30.505048+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:31.505209+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:32.505434+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:33.505589+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:34.506767+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:35.506895+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:36.507035+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:37.507180+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:38.507293+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:39.507416+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:40.507548+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:41.507698+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:42.507872+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:43.508034+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:44.525193+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:45.525365+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:46.525570+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:47.525750+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:48.525858+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:49.526096+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:50.526302+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:51.526579+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:52.526761+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:53.526974+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:54.527177+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:55.527413+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:56.527713+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:57.527915+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:58.528046+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:47:59.528230+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:00.528406+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:01.528592+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:02.528933+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:03.529109+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:04.529315+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:05.529504+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:06.529709+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:07.529905+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:08.530074+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:09.530266+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:10.530448+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:11.530662+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:12.530881+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:13.531009+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:14.531191+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:15.531379+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 139264 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:16.531581+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:17.531745+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:18.531954+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:19.532111+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:20.532330+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:21.532477+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:22.532676+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:23.532878+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:24.533052+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:25.533250+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:26.533476+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:27.533645+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:28.533843+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:29.534021+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:30.534203+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:31.534374+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:32.534641+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:33.534896+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:34.535114+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:35.535283+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:36.535435+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:37.535579+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:38.535693+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:39.535851+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:40.535991+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:41.536152+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:42.536327+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:43.536532+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:44.536848+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:45.537103+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:46.537319+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:47.537489+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:48.537716+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:49.537924+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:50.538191+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:51.538372+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:52.538641+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:53.538918+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:54.539096+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:55.539299+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:56.539526+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:57.539722+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:58.539917+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:48:59.540134+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:00.540322+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:01.540603+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:02.540922+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:03.541156+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:04.541377+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:05.541569+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:06.541933+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:07.542107+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:08.542263+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:09.542511+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:10.542710+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:11.542871+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:12.543096+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:13.543278+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:14.543493+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:15.543676+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:16.543861+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:17.544020+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:18.544219+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:19.544408+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:20.544620+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:21.544911+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:22.545144+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:23.545396+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:24.545757+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:25.546155+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:26.546328+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:27.546488+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:28.546665+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:29.546860+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:30.547024+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:31.547190+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:32.547402+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:33.547576+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:34.547744+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:35.547910+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:36.548084+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:37.548252+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:38.548429+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:39.548610+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:40.550348+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:41.558041+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:42.559024+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:43.560312+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:44.560700+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:45.560970+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:46.561452+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:47.561607+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:48.562109+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:49.562897+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:50.563198+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:51.563381+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:52.563844+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:53.564346+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:54.564922+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:55.565470+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:56.566993+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:57.567554+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:58.568163+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:49:59.568984+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:00.569264+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:01.569919+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:02.570197+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:03.570747+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:04.571298+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:05.571764+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:06.572239+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:07.572526+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:08.572953+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:09.573360+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:10.573899+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:11.574339+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
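The two ratios logged just above repeat verbatim on every cache resize and are rounded decimals of simple fractions: 0.285714 is 2/7 and 0.0555556 is 1/18. How BlueStore derives those splits internally is not visible in this log; the arithmetic itself is easy to confirm:

    from fractions import Fraction

    # Recover the underlying fractions from the rounded decimals in the log.
    for ratio in (0.285714, 0.0555556):
        print(Fraction(ratio).limit_denominator(100))   # prints 2/7, then 1/18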
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:12.574652+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:13.574952+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:14.575289+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:15.575563+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:16.575877+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
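_resize_shards shows how the cache budget has been carved up: the *_alloc fields nearly exhaust cache_size, while the *_used counters sit between a few hundred bytes and roughly 370 KiB, meaning the shards are allocated but almost empty. Summing the logged values (treated as bytes, which matches their magnitudes):

    # Totals from the _resize_shards line above (all values in bytes).
    alloc = {
        "kv":       1207959552,   # 1.125 GiB for the RocksDB block cache
        "kv_onode": 234881024,    # 224 MiB
        "meta":     1140850688,   # 1.0625 GiB
        "data":     218103808,    # 208 MiB
    }
    cache_size = 2845415832       # ~2.65 GiB total budget
    print(f"{sum(alloc.values())} of {cache_size} bytes handed out")
    for name, nbytes in alloc.items():
        print(f"{name:>8}: {nbytes / 2**20:8.1f} MiB  ({nbytes / cache_size:.1%})")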
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:17.576130+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:18.576422+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:19.576670+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:20.576908+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:21.577095+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
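tune_memory is the priority-cache autotuner comparing the process heap against its memory target. All values are bytes: the target is exactly 4 GiB (consistent with the osd_memory_target default), the mapped heap is only about 58 MiB, so the tuner leaves the cache budget untouched, which is why old mem equals new mem on every one of these lines:

    # Reading one tune_memory line (values verbatim, in bytes).
    target = 4294967296    # memory target: exactly 4 GiB
    mapped = 60678144      # heap currently mapped, far below target
    budget = 2845415832    # cache budget; "old mem" == "new mem", so unchanged
    print(f"mapped {mapped / 2**20:.1f} MiB of a {target / 2**30:.0f} GiB target; "
          f"budget held at {budget / 2**30:.2f} GiB")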
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:22.577361+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:23.577608+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:24.577879+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:25.578113+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:26.578301+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:27.578517+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:28.578765+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:29.579045+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:30.579311+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:31.579517+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:32.579834+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:33.580101+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:34.580369+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:35.580593+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:36.580920+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:37.581203+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:38.581454+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:39.581787+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:40.582007+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:41.582198+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:42.582633+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:43.582931+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:44.583613+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:45.584557+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:46.585667+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:47.586256+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:48.586569+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:49.587119+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:50.587566+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:51.588196+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:52.588504+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:53.588746+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:54.589045+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:55.589283+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:56.589643+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:57.589961+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:58.590190+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:50:59.590428+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:00.590706+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:01.590935+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:02.591164+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:03.591373+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:04.591580+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:05.591874+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:06.592048+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:07.592212+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:08.592361+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:09.592516+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:10.592652+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:11.592882+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:12.593121+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:13.593330+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:14.593479+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:15.593673+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:16.593911+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:17.594184+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:18.594359+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:19.594529+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:20.594654+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:21.594750+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:22.594980+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:23.595139+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:24.595288+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:25.595422+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets getting new tickets!
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:26.595732+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _finish_auth 0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:26.597174+0000)
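This burst is the one real state change in the section: instead of the usual no-op check, _check_auth_tickets decides its tickets are due for renewal ("getting new tickets!"), sends the request to mon.compute-0 at v2:192.168.122.100:3300/0, and _finish_auth 0 reports the handshake completed with a zero (success) return code, after which the expiry horizon resumes advancing. A hypothetical filter for surfacing just these renewal handshakes in a journal dump; the marker strings are copied from the lines above:

    import sys

    # Print only the monclient ticket-renewal handshake lines.
    MARKERS = ("getting new tickets!", "_send_mon_message", "_finish_auth")
    for line in sys.stdin:
        if "monclient:" in line and any(m in line for m in MARKERS):
            sys.stdout.write(line)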
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:27.595918+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:28.596085+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:29.596277+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:30.596473+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:31.596622+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:32.596834+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:33.596977+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 ms_handle_reset con 0x558d36ced000 session 0x558d377c8960
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: handle_auth_request added challenge on 0x558d38e0e000
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 ms_handle_reset con 0x558d36ced800 session 0x558d36ece1e0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: handle_auth_request added challenge on 0x558d36ced000
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 ms_handle_reset con 0x558d38e0e400 session 0x558d36aea000
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: handle_auth_request added challenge on 0x558d38e0ec00
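Right after the renewal, three connections are reset (ms_handle_reset) and three incoming peers re-authenticate, each receiving a cephx challenge from handle_auth_request; matched counts are the expected picture when sessions are cycled rather than failing. A sketch that tallies both, with regexes written against the exact message text above:

    import re, sys

    # Tally connection resets against auth challenges in a journal dump.
    resets = challenges = 0
    for line in sys.stdin:
        if re.search(r"ms_handle_reset con 0x[0-9a-f]+", line):
            resets += 1
        elif "handle_auth_request added challenge" in line:
            challenges += 1
    print(f"{resets} resets, {challenges} challenges")   # 3 and 3 in this burst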
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:34.597157+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:35.597285+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:36.597476+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:37.597636+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:38.597845+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:39.598000+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:40.598173+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:41.598400+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:42.598617+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:43.598767+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:44.598959+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:45.599108+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:46.599292+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:47.599479+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:48.599639+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:49.599943+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:50.600106+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:51.600756+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:52.601154+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:53.601353+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:54.601607+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:55.601914+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:56.602203+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:57.602409+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:58.602604+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:51:59.602775+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:00.603007+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 163840 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:01.603185+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:02.603400+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:03.603571+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:04.603774+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:05.603959+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:06.604109+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:07.604256+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:08.604391+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:09.604612+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:10.604773+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:11.604934+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:12.605097+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:13.605256+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:14.605428+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:15.605631+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:16.605899+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:17.606096+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:18.606281+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:19.606462+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:20.606624+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:21.606837+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:22.607000+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:23.607140+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:24.607338+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:25.607504+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:26.607691+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:27.607908+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:28.608077+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:29.608244+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:30.608408+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:31.608591+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:32.609094+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:33.609222+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:34.609446+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:35.609616+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:36.609753+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:37.609935+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:38.610114+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:39.610342+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:40.610565+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:41.610852+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:42.611100+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:43.611299+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:44.611446+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:45.611590+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:46.611724+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:47.611938+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:48.612128+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:49.612348+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:50.612544+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:51.612711+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:52.612917+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:53.613110+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:54.613337+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:55.613533+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:56.613733+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:57.613905+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:58.614048+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:52:59.614214+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:00.614419+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:01.705536+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:02.705766+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:03.705945+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:04.706158+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:05.706331+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:06.706457+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:07.706578+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:08.706728+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:09.706910+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:10.707059+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:11.707207+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:12.707394+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:13.707548+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:14.707702+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:15.707954+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:16.708104+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:17.708263+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:18.708415+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:19.708579+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:20.708788+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:21.709009+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:22.709225+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:23.709383+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:24.709544+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:25.709666+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:26.709904+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:27.710104+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:28.710302+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:29.710471+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:30.710778+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:31.711054+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:32.711327+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:33.711487+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:34.711715+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:35.711886+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:36.712090+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:37.712255+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:38.712456+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:39.712629+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:40.712880+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:41.713084+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:42.713350+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:43.713552+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:44.713753+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:45.713914+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:46.714108+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:47.714260+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:48.714446+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:49.714574+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:50.714737+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:51.714918+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:52.715111+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:53.715317+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:54.715543+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:55.715871+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:56.715998+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:57.716589+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:58.717058+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:53:59.717269+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:00.717513+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:01.717740+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:02.718031+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:03.718358+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:04.718519+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:05.718699+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:06.718921+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:07.719147+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:08.719394+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:09.719599+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:10.719784+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:11.719982+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:12.720224+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:13.720431+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:14.720640+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:15.720846+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:16.721069+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:17.721255+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:18.721476+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:19.721680+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:20.721903+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:21.722081+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:22.722289+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:23.722523+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:24.722749+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:25.722875+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:26.723048+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:27.723232+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:28.723413+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:29.723599+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:30.724365+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:31.724570+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:32.724869+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:33.725069+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:34.725792+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:35.726300+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:36.726485+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:37.726736+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:38.727038+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:39.727668+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:40.727878+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:41.728121+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:42.728396+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:43.728553+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:44.729047+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:45.729350+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:46.729581+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:47.729858+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:48.730134+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:49.730342+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:50.730564+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:51.730855+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:52.731159+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:53.731320+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:54.731469+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:55.731689+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:56.731930+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:57.732122+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:58.732291+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:54:59.732483+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:00.732694+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:01.732843+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:02.733048+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:03.733180+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:04.733375+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:05.733552+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:06.733727+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:07.733918+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:08.734037+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:09.734181+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:10.734373+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:11.734534+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:12.734746+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:13.734912+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:14.735092+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:15.735270+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:16.735417+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:17.735563+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:18.735709+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:19.735850+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:20.736050+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:21.736220+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:22.736411+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:23.736555+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:24.736727+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:25.736887+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:26.737072+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:27.737248+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:28.737404+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:29.737592+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:30.737771+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:31.737865+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:32.738026+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:33.738173+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:34.738355+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:35.738520+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:36.738694+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:37.739061+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:38.739219+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:39.739443+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:40.739646+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:41.739886+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:42.740094+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:43.740299+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:44.740496+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:45.740681+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:46.740866+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:47.741047+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:48.741257+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:49.741398+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:50.741631+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:51.741782+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:52.742022+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:53.742306+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:54.742524+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:55.742855+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:56.743049+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:57.743220+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:58.743415+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:55:59.743623+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:00.743855+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:01.744082+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:02.744326+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:03.744529+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:04.744738+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:05.744923+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:06.745118+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:07.745283+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:08.745524+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:09.745716+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:10.745927+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:11.746160+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:12.746418+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:13.746611+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:14.746944+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:15.747167+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:16.747377+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:17.747575+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:18.747742+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:19.747955+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:20.748136+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:21.748313+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:22.748541+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:23.748749+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:24.748926+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:25.749106+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 4230 writes, 19K keys, 4230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4230 writes, 387 syncs, 10.93 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:26.749291+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:27.749452+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:28.749678+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:29.749883+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:30.750083+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:31.750282+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:32.750499+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:33.750667+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:34.750831+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:35.750946+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:36.751115+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:37.751274+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:38.751465+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:39.751599+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:40.751920+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:41.752051+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:42.752199+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 20:57:28 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:43.752362+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:44.752630+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:45.752763+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:46.752970+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:47.753113+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:48.753221+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:49.753333+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:50.753459+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:51.753579+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:52.753726+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:53.753910+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:54.754046+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 131072 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: do_command 'config diff' '{prefix=config diff}'
Nov 25 20:57:28 compute-0 ceph-osd[89084]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:55.754157+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 1015808 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: do_command 'config show' '{prefix=config show}'
Nov 25 20:57:28 compute-0 ceph-osd[89084]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 20:57:28 compute-0 ceph-osd[89084]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 20:57:28 compute-0 ceph-osd[89084]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 20:57:28 compute-0 ceph-osd[89084]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 20:57:28 compute-0 ceph-osd[89084]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:56.754290+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 1695744 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x296a2/0x66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: tick
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_tickets
Nov 25 20:57:28 compute-0 ceph-osd[89084]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T20:56:57.754591+0000)
Nov 25 20:57:28 compute-0 ceph-osd[89084]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 1802240 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 20:57:28 compute-0 ceph-osd[89084]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 20:57:28 compute-0 ceph-osd[89084]: bluestore.MempoolThread(0x558d35561b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376985 data_alloc: 218103808 data_used: 16384
Nov 25 20:57:28 compute-0 ceph-osd[89084]: do_command 'log dump' '{prefix=log dump}'
Nov 25 20:57:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 25 20:57:29 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002856576' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 20:57:29 compute-0 ceph-mon[75144]: from='client.14702 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 20:57:29 compute-0 ceph-mon[75144]: pgmap v1575: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:29 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/238814655' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 20:57:29 compute-0 ceph-mon[75144]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 20:57:29 compute-0 ceph-mon[75144]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 20:57:29 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2002856576' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 20:57:29 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14712 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:29 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 25 20:57:29 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3924671955' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 20:57:30 compute-0 ceph-mon[75144]: from='client.14712 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:30 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3924671955' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 20:57:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 25 20:57:30 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693074604' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 20:57:30 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1576: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 25 20:57:30 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616914209' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 20:57:30 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 20:57:31 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 25 20:57:31 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561842483' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 20:57:31 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/3693074604' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 20:57:31 compute-0 ceph-mon[75144]: pgmap v1576: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:31 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1616914209' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 20:57:31 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 20:57:31 compute-0 systemd[1]: Started Hostname Service.
Nov 25 20:57:31 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14722 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 25 20:57:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606767444' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 20:57:32 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/561842483' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 20:57:32 compute-0 ceph-mon[75144]: from='client.14722 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:32 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2606767444' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 20:57:32 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1577: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:32 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 25 20:57:32 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763349878' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 20:57:33 compute-0 podman[285931]: 2025-11-25 20:57:33.017900042 +0000 UTC m=+0.107294341 container health_status eac0695290830653924d5244d1992286ff7ff0f5e40549a2244d65538cd8838b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 25 20:57:33 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14728 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:33 compute-0 ceph-mon[75144]: pgmap v1577: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:33 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2763349878' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 20:57:33 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 25 20:57:33 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/52598627' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 20:57:33 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14732 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:34 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14734 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:34 compute-0 ceph-mon[75144]: from='client.14728 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:34 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/52598627' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 20:57:34 compute-0 ceph-mon[75144]: from='client.14732 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:34 compute-0 ceph-mgr[75443]: log_channel(cluster) log [DBG] : pgmap v1578: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 25 20:57:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938062559' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 20:57:34 compute-0 ceph-mon[75144]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 25 20:57:34 compute-0 ceph-mon[75144]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210406089' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 20:57:35 compute-0 ceph-mon[75144]: from='client.14734 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:35 compute-0 ceph-mon[75144]: pgmap v1578: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 20:57:35 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/2938062559' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 20:57:35 compute-0 ceph-mon[75144]: from='client.? 192.168.122.100:0/1210406089' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14740 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: log_channel(audit) log [DBG] : from='client.14742 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.4371499967441557e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 20:57:35 compute-0 ceph-mgr[75443]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 20:57:35 compute-0 ceph-mon[75144]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408